Results 1 – 3 of 3
S.: Sequential spectral learning to hash with multiple representations, 2012
Cited by 5 (0 self)
Abstract. Learning to hash involves learning hash functions from a set of images for embedding high-dimensional visual descriptors into a similarity-preserving low-dimensional Hamming space. Most existing methods resort to a single representation of images; that is, only one type of visual descriptor is used to learn a hash function to assign binary codes to images. However, images are often described by multiple different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate these multiple representations into learning a hash function, leading to multi-view hashing. In this paper we present a sequential spectral learning approach to multi-view hashing, where a hash function is sequentially determined by solving successive maximizations of local variances subject to decorrelation constraints. We compute multi-view local variances by α-averaging view-specific distance matrices, such that the best averaged distance matrix is determined by minimizing its α-divergence from the view-specific distance matrices. We also present a scalable implementation, exploiting a fast approximate k-NN graph construction method, in which α-averaged distances computed in small partitions determined by recursive spectral bisection are gradually merged in conquer steps until all examples are used. Numerical experiments on the Caltech-256, CIFAR-20, and NUS-WIDE datasets confirm the high performance of our method in comparison to single-view spectral hashing as well as existing multi-view hashing methods.
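The two core steps described in this abstract, averaging view-specific distance matrices and extracting bits from a spectral embedding, can be sketched as follows. This is an illustrative reading, not the paper's exact algorithm: `alpha_average` uses a simple power mean as a hedged stand-in for the α-divergence-minimizing average, and the bits come from sign-thresholding graph Laplacian eigenvectors as in standard spectral hashing.

```python
import numpy as np

def alpha_average(distance_mats, alpha):
    """Average view-specific distance matrices (illustrative power mean).

    The paper derives its averaged distance matrix by minimizing
    alpha-divergence; the power mean below is only an assumed stand-in.
    """
    if alpha == 1.0:  # exponent -> 0 limit: elementwise geometric mean
        logs = np.mean([np.log(D + 1e-12) for D in distance_mats], axis=0)
        return np.exp(logs)
    p = (1.0 - alpha) / 2.0
    return np.mean([D ** p for D in distance_mats], axis=0) ** (1.0 / p)

def spectral_bits(D_avg, n_bits, sigma=1.0):
    """Binary codes via Laplacian eigenvectors (standard spectral hashing)."""
    W = np.exp(-D_avg ** 2 / (2.0 * sigma ** 2))   # affinity from distances
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                 # unnormalized Laplacian
    _, vecs = np.linalg.eigh(L)                    # eigenvalues ascending
    Y = vecs[:, 1:n_bits + 1]                      # skip the constant vector
    return (Y > 0).astype(np.uint8)                # sign-threshold to bits
```

Note that for α = −1 the power mean reduces to the plain arithmetic mean of the distance matrices, so the parameter interpolates between averaging behaviors.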
Deep Learning to Hash with Multiple Representations
Cited by 3 (0 self)
Abstract—Hashing seeks an embedding of high-dimensional objects into a similarity-preserving low-dimensional Hamming space such that similar objects are indexed by binary codes with small Hamming distances. A variety of hashing methods have been developed, but most of them resort to a single view (representation) of data. However, objects are often described by multiple representations. For instance, images are described by a few different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate multiple representations into hashing, leading to multi-view hashing. In this paper we present a deep network for multi-view hashing, referred to as deep multi-view hashing (DMVH), where each layer of hidden nodes is composed of view-specific and shared hidden nodes, in order to learn individual and shared hidden spaces from multiple views of data. Numerical experiments on image datasets demonstrate the useful behavior of our deep multi-view hashing, compared to a recently proposed multimodal deep network as well as existing shallow models of hashing. Keywords—deep learning; harmonium; hashing; multi-view learning; restricted Boltzmann machines
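A single hidden layer of the architecture this abstract describes, in which each view feeds its own private units and all views jointly feed a shared block, might look like the following sketch. The sigmoid activation, the layer sizes, and all names are assumptions for illustration; the paper's model is built from restricted Boltzmann machines (harmoniums) and is trained generatively, not just run forward like this.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dmvh_layer(views, specific_W, shared_W):
    """One hidden layer with view-specific and shared units (sketch only).

    views      : list of (n, d_v) arrays, one per view (e.g. SIFT, GIST, HOG)
    specific_W : list of (d_v, h_v) weights into each view's private units
    shared_W   : list of (d_v, h_s) weights from every view into shared units
    """
    # View-specific hidden units see only their own view's descriptors.
    specific = [sigmoid(x @ w) for x, w in zip(views, specific_W)]
    # Shared hidden units pool evidence from all views before the activation.
    shared = sigmoid(sum(x @ w for x, w in zip(views, shared_W)))
    return specific, shared
```

Stacking such layers and binarizing the top-level shared activations is one plausible way to read "individual and shared hidden spaces" as a hashing pipeline.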
Hashing with Generalized Nyström Approximation
Abstract—Hashing, which involves learning binary codes to embed high-dimensional data into a similarity-preserving low-dimensional Hamming space, is often formulated as linear dimensionality reduction followed by binary quantization. Linear dimensionality reduction, based on a maximum-variance formulation, requires the leading eigenvectors of the data covariance or graph Laplacian matrix. Computing leading singular vectors or eigenvectors in the case of high dimension and large sample size is a main bottleneck in most data-driven hashing methods. In this paper we address the use of the generalized Nyström method, where a subset of rows and columns is used to approximately compute the leading singular vectors of the data matrix, in order to improve the scalability of hashing methods in the case of high-dimensional data with large sample size. In particular, we validate the useful behavior of generalized Nyström approximation with uniform sampling in the case of a recently developed hashing method based on principal component analysis (PCA) followed by iterative quantization, referred to as PCA+ITQ, developed by Gong and Lazebnik. We compare the performance of generalized Nyström approximation with uniform and non-uniform sampling to the full singular value decomposition (SVD) method, confirming that uniform sampling improves the computational and space complexities dramatically while the performance is not much sacrificed. In addition, we present low-rank approximation error bounds for generalized Nyström approximation with uniform sampling, which is not a trivial extension of available results on the non-uniform sampling case. Keywords—CUR decomposition; hashing; generalized Nyström approximation; pseudoskeleton approximation; uniform sampling
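The row-and-column sampling scheme this abstract describes, also known as a pseudoskeleton or CUR-style approximation, can be sketched as follows. The function and variable names are illustrative, and the paper's error bounds and its use inside PCA+ITQ are not reproduced here; this only shows the basic uniform-sampling construction.

```python
import numpy as np

def generalized_nystrom(A, n_rows, n_cols, rng):
    """Approximate A by C @ pinv(W) @ R from uniformly sampled rows/columns.

    C = A[:, J] (sampled columns), R = A[I, :] (sampled rows), and
    W = A[I, J] is their intersection. For a rank-r matrix A, the product
    recovers A exactly whenever W also has rank r.
    """
    m, n = A.shape
    I = rng.choice(m, size=n_rows, replace=False)  # uniform row sample
    J = rng.choice(n, size=n_cols, replace=False)  # uniform column sample
    C, R, W = A[:, J], A[I, :], A[np.ix_(I, J)]
    return C @ np.linalg.pinv(W) @ R
```

The leading singular vectors of the small factors can then stand in for those of A itself when A is too large to decompose directly.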