
## Iterative Nearest Neighbors for Classification and Dimensionality Reduction

Citations: 9 (8 self)

### Citations

2408 | Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science 290
- Roweis, Saul
Citation Context ...sentation), Linear Discriminant Analysis (LDA)[7, 13] (enhances the inter-class variance in comparison to the intraclass variance). Examples of non-linear techniques are Locally Linear Embedding (LLE)[14] (preserves the local neighborhood), Laplacian Embedding (LE)[2] (preserves the distances to neighbors), or Sparse Representation Embedding (SRE)[15] (preserves the sparse representation). In the cont...

933 | Robust face recognition via sparse representation
- Wright, Yang, et al.
- 2009
Citation Context ...the query. ‘Nearest’ is defined on the basis of some similarity, distance, or metric. The Sparse Representation-based Classifier (SRC) proved to yield state-of-the-art performance in face recognition [18]. This classifier starts from a Sparse Representation (SR) in the l1-regularized least squares sense. It then decides based on the class labels of the samples that contribute to the representation of ...
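The SRC decision rule described in this context can be sketched as follows. This is a minimal illustration, not the authors' implementation: `solve_l1` stands in for any l1-regularized least-squares solver, and the per-class residual test (keep only one class's coefficients, reconstruct the query, pick the class with the smallest residual) is the standard SRC rule from [18].

```python
import numpy as np

def src_classify(X, labels, q, solve_l1):
    """Sketch of the SRC decision rule: sparse-code the query q over
    the training samples (columns of X), then assign the class whose
    samples best reconstruct q from its own coefficients alone."""
    w = solve_l1(X, q)  # sparse representation of q over X
    best_class, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        w_c = np.where(mask, w, 0.0)            # keep only class-c coefficients
        res = np.linalg.norm(q - X @ w_c)       # class-c reconstruction residual
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

Any of the solvers named later in these contexts (L1LS, Homotopy, Feature Sign) could be plugged in as `solve_l1`.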

663 | Laplacian eigenmaps and spectral techniques for embedding and clustering
- Belkin
- 2002
Citation Context ...he inter-class variance in comparison to the intraclass variance). Examples of non-linear techniques are Locally Linear Embedding (LLE)[14] (preserves the local neighborhood), Laplacian Embedding (LE)[2] (preserves the distances to neighbors), or Sparse Representation Embedding (SRE)[15] (preserves the sparse representation). In the context of classification, we define the criteria / properties to ma...

645 | The pascal visual object classes challenge
- Everingham, Gool, et al.
- 2010
Citation Context ...ns with the simplicity and speed of kNN. The definition Table 4. Image classification results on PASCAL VOC 2007 benchmark object class aero bicyc bird boat bottle bus car cat chair cow Best of VOC’07[6] 77.5 63.6 56.1 71.9 33.1 60.6 78.0 58.8 53.5 42.6 LLC [17] 74.8 65.2 50.7 70.9 28.7 68.8 78.5 61.7 54.3 48.6 NBNN 70.7 59.3 40.3 47.4 23.0 57.4 74.3 50.7 42.2 32.9 NBSRl1 67.9 63.6 47.0 50.5 22.4 57....

492 | Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification
- Yang, Yu, et al.
- 2009
Citation Context ...sample, xi. The class ĉ has the largest impact in the representation of the query sample q. This is the INN-based Classifier decision as used in our experiments. 5.3. INN Spatial Pyramid Matching In [20], the Sparse Coding Spatial Pyramid Matching method has been proposed for image classification. The sparse coding, seen as a ‘soft assignment’ over the learned dictionary, uses the l1-regularized leas...

469 | PCA versus LDA
- Martinez, Kak
- 2001
Citation Context ...iance), Locality Preserving Projections (LPP)[9] (conserves the local similarities), Sparse Representation Linear Projections[15] (embeds the sparse representation), Linear Discriminant Analysis (LDA)[7, 13] (enhances the inter-class variance in comparison to the intraclass variance). Examples of non-linear techniques are Locally Linear Embedding (LLE)[14] (preserves the local neighborhood), Laplacian Em...

441 | Efficient sparse coding algorithms
- Lee, Battle, et al.
Citation Context ...onal to stabilize the solution; this is the regularization parameter for CRC. For the SRC classifier we used various l1-regularized least squares solvers: L1LS [11], Homotopy [5, 1], and Feature Sign [12]. The reader is referred to [19] for a comparison study of l1-solvers. The l1-solvers tackle the following problem: ŵ = argmin_w ‖q − Xw‖² + λ‖w‖₁ (18) We use a tolerance of 0.05, for the FeatureSign, ...
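The l1-regularized least-squares problem (18) quoted in this context can be solved by many methods. As a hedged illustration only, here is a minimal ISTA (proximal-gradient) sketch; this is not one of the solvers the paper actually uses (L1LS, Homotopy, Feature Sign), and the step size and iteration count are illustrative choices:

```python
import numpy as np

def ista_l1(X, q, lam=0.05, step=None, n_iter=500):
    """Minimize ||q - X w||^2 + lam * ||w||_1 by iterative
    soft-thresholding (ISTA), a generic proximal-gradient solver."""
    w = np.zeros(X.shape[1])
    if step is None:
        # 1 / Lipschitz constant of the gradient of the quadratic term
        step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ w - q)          # gradient of ||q - Xw||^2
        z = w - step * grad                      # gradient step
        # soft-thresholding = proximal operator of the l1 penalty
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w
```

The soft-thresholding step is what produces the sparsity that SRC exploits; shrinking λ toward 0 recovers ordinary least squares.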

437 | Locality-constrained linear coding for image classification
- Wang, Yang, et al.
Citation Context ...d performance over the original ScSPM [20] based on solving l1-regularized least squares, and having comparable speed and better performance when compared with Locality-Constraint Linear Coding (LLC) [17], acknowledged to be one of the fastest and most strongly performing image classification frameworks, when run on PASCAL VOC 2007. Moreover, INN has desirable properties, such as: (i) steep convergenc...

414 | Locality preserving projections.
- He, Niyogi
- 2003
Citation Context ...ought to be preserved or enhanced under the reducing projection. Examples of linear techniques are Principal Component Analysis (PCA)[22] (enhances the variance), Locality Preserving Projections (LPP)[9] (conserves the local similarities), Sparse Representation Linear Projections[15] (embeds the sparse representation), Linear Discriminant Analysis (LDA)[7, 13] (enhances the inter-class variance in co...

290 | An interior-point method for large-scale l1-regularized logistic regression
- Koh, Kim, et al.
- 2007
Citation Context ...ix. A small value λ is added to the diagonal to stabilize the solution; this is the regularization parameter for CRC. For the SRC classifier we used various l1-regularized least squares solvers: L1LS [11], Homotopy [5, 1], and Feature Sign [12]. The reader is referred to [19] for a comparison study of l1-solvers. The l1-solvers tackle the following problem: ŵ = argmin_w ‖q − Xw‖² + λ‖w‖₁ (18) We use a ...

275 | Sparse principal component analysis
- Zou, Hastie, et al.
Citation Context ...e for dimensionality reduction techniques often is a property that ought to be preserved or enhanced under the reducing projection. Examples of linear techniques are Principal Component Analysis (PCA)[22] (enhances the variance), Locality Preserving Projections (LPP)[9] (conserves the local similarities), Sparse Representation Linear Projections[15] (embeds the sparse representation), Linear Discrimin...

265 | In defense of nearest-neighbor based image classification
- Boiman, Shechtman, et al.
Citation Context ...entation from the non-linear Sparse Representation-based Embedding [15] or other graph based methods to derive INN-based variants. 5. Classification 5.1. INN-based Naive Bayes Image Classification In [4], the powerful Naive Bayes Nearest Neighbor (NBNN) image classifier has been proposed. To a certain extent, it is a learning-free and parameter-free method, it avoids discretization, and it generalize...

133 | The Statistical Utilization of Multiple Measurements
- Fisher
- 1938
Citation Context ...iance), Locality Preserving Projections (LPP)[9] (conserves the local similarities), Sparse Representation Linear Projections[15] (embeds the sparse representation), Linear Discriminant Analysis (LDA)[7, 13] (enhances the inter-class variance in comparison to the intraclass variance). Examples of non-linear techniques are Locally Linear Embedding (LLE)[14] (preserves the local neighborhood), Laplacian Em...

107 | Sparse representation or collaborative representation: Which helps face recognition
- Zhang, Yang, et al.
Citation Context ...-regularized least squares sense. It then decides based on the class labels of the samples that contribute to the representation of the query sample. The Collaborative Representation Classifier (CRC) [21] starts instead from the l2-regularized least squares solution. Of course, apart from performance, there also is the computational efficiency to consider. Whereas SR and CR may excel in terms of the f...
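The l2-regularized (collaborative) representation mentioned here, with λ added to the diagonal for stability as described in the solver context above, has a closed form. A minimal sketch, not the CRC authors' code:

```python
import numpy as np

def crc_code(X, q, lam=0.5):
    """Collaborative representation of query q over samples X (columns):
    the ridge / l2-regularized least-squares solution
    w = (X^T X + lam*I)^{-1} X^T q, with lam on the diagonal
    playing the stabilizing role described in the text."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(G, X.T @ q)
```

Because this is one linear solve (and the factorization of G can be cached across queries), CR is typically much cheaper than the iterative l1 solvers used for SR.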

69 | Kernel descriptors for visual recognition.
- Bo, Ren, et al.
- 2010
Citation Context ...89.66 86.30 POLYSVM 92.76 92.51 92.49 RBFSVM 92.43 92.28 92.46 Table 3. Performance Comparison on Scene-15 Learned? Train/test split 100/100 Method avg CR yes ScSPM+SIFT [20] 80.28± 0.93 yes EMK+KDES [3] 87.5 yes LScSPM+SIFT [8] 89.75±0.5 yes NBNN&BoF kernels+SIFT [16] 85± 4 yes NBNN-f2+SIFT [16] 75± 3 yes INNSPM+SIFT 81.4± 1 no NBNN+NIMBLE 74.2± 1 no NBSRl1+NIMBLE 77.75± 1 no NBINN+NIMBLE 78.23±1 an...

68 | Fast solution of l1-norm minimization problems when the solution may be sparse
- Donoho, Tsaig
Citation Context ...ue λ is added to the diagonal to stabilize the solution; this is the regularization parameter for CRC. For the SRC classifier we used various l1-regularized least squares solvers: L1LS [11], Homotopy [5, 1], and Feature Sign [12]. The reader is referred to [19] for a comparison study of l1-solvers. The l1-solvers tackle the following problem: ŵ = argmin_w ‖q − Xw‖² + λ‖w‖₁ (18) We use a tolerance of 0.05...

46 | Local features are not lonely laplacian sparse coding for image classification.
- Gao, Tsang, et al.
- 2010
Citation Context ... 92.51 92.49 RBFSVM 92.43 92.28 92.46 Table 3. Performance Comparison on Scene-15 Learned? Train/test split 100/100 Method avg CR yes ScSPM+SIFT [20] 80.28± 0.93 yes EMK+KDES [3] 87.5 yes LScSPM+SIFT [8] 89.75±0.5 yes NBNN&BoF kernels+SIFT [16] 85± 4 yes NBNN-f2+SIFT [16] 75± 3 yes INNSPM+SIFT 81.4± 1 no NBNN+NIMBLE 74.2± 1 no NBSRl1+NIMBLE 77.75± 1 no NBINN+NIMBLE 78.23±1 and INNSPM with λ = 0.1. Fo...

45 | Fast l1-minimization algorithms and an application in robust face recognition: a review
- Yang, Ganesh, et al.
- 2010
Citation Context ...this is the regularization parameter for CRC. For the SRC classifier we used various l1-regularized least squares solvers: L1LS [11], Homotopy [5, 1], and Feature Sign [12]. The reader is referred to [19] for a comparison study of l1-solvers. The l1-solvers tackle the following problem: ŵ = argmin_w ‖q − Xw‖² + λ‖w‖₁ (18) We use a tolerance of 0.05, for the FeatureSign, L1LS, and Homotopy solvers, as i...

43 | Robust classification of objects, faces, and flowers using natural image statistics
- Kanan, Cottrell
Citation Context ...dard NBNN, the recently proposed NBSRl1 , and the original ScSPM. For this purpose we are using the Scene-15 dataset and PASCAL VOC 2007 benchmark. We keep the same setup for feature extraction as in [10]. The NIMBLE features are extracted using 100 (eye) fixations per image, and are PCA-projected into a 500-dimensional space. For NBSRl1 we use the FeatureSign solver with λ = 0.3 for the sake of speed...

33 | The NBNN kernel
- Tuytelaars, Fritz, et al.
- 2011
Citation Context ...ble 3. Performance Comparison on Scene-15 Learned? Train/test split 100/100 Method avg CR yes ScSPM+SIFT [20] 80.28± 0.93 yes EMK+KDES [3] 87.5 yes LScSPM+SIFT [8] 89.75±0.5 yes NBNN&BoF kernels+SIFT [16] 85± 4 yes NBNN-f2+SIFT [16] 75± 3 yes INNSPM+SIFT 81.4± 1 no NBNN+NIMBLE 74.2± 1 no NBSRl1+NIMBLE 77.75± 1 no NBINN+NIMBLE 78.23±1 and INNSPM with λ = 0.1. For Scene-15 we report the results in Table...

14 | Primal Dual Pursuit: A Homotopy Based Algorithm for the Dantzig Selector
- Asif
- 2008
Citation Context ...ue λ is added to the diagonal to stabilize the solution; this is the regularization parameter for CRC. For the SRC classifier we used various l1-regularized least squares solvers: L1LS [11], Homotopy [5, 1], and Feature Sign [12]. The reader is referred to [19] for a comparison study of l1-solvers. The l1-solvers tackle the following problem: ŵ = argmin_w ‖q − Xw‖² + λ‖w‖₁ (18) We use a tolerance of 0.05...

12 | Sparse representation based projections
- Timofte, Van Gool
Citation Context ...ear techniques are Principal Component Analysis (PCA)[22] (enhances the variance), Locality Preserving Projections (LPP)[9] (conserves the local similarities), Sparse Representation Linear Projections[15] (embeds the sparse representation), Linear Discriminant Analysis (LDA)[7, 13] (enhances the inter-class variance in comparison to the intraclass variance). Examples of non-linear techniques are Local...