Results 1–10 of 105
Graph embedding and extension: A general framework for dimensionality reduction
IEEE Trans. Pattern Anal. Mach. Intell., 2007
Abstract

Cited by 258 (30 self)
Over the past few decades, a large family of algorithms—supervised or unsupervised; stemming from statistics or geometry theory—has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for the corresponding kernel and tensor extensions.
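The linearized form of the graph-embedding criterion this abstract describes reduces to a generalized eigenproblem between two graph Laplacians. A minimal numpy sketch (the function name `linear_graph_embedding` and the tiny ridge term are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def linear_graph_embedding(X, W, W_p, dim):
    """Linearized graph embedding (sketch).
    X: (features, samples); W: intrinsic graph weights (symmetric);
    W_p: penalty graph weights (symmetric); dim: target dimension."""
    L  = np.diag(W.sum(axis=1)) - W        # intrinsic Laplacian
    Lp = np.diag(W_p.sum(axis=1)) - W_p    # penalty Laplacian
    A = X @ L  @ X.T                       # intrinsic scatter
    B = X @ Lp @ X.T + 1e-8 * np.eye(X.shape[0])  # ridge for invertibility
    # generalized eigenproblem A v = lam B v, solved via B^{-1} A
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)          # keep smallest-ratio directions
    return vecs[:, order[:dim]].real
```

Different choices of W and W_p then recover algorithms such as LDA or MFA within the same code path.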
Beyond streams and graphs: Dynamic tensor analysis
In KDD, 2006
Abstract

Cited by 111 (16 self)
How do we find patterns in author-keyword associations, evolving over time? Or in DataCubes, with product-branch-customer sales information? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, and rule identification in numerous settings like streaming data, text, graphs, social networks and many more. However, they have only two orders, like author and keyword, in the above example. We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce the dynamic tensor analysis (DTA) method, and its variants. DTA provides a compact summary for high-order and high-dimensional data, and it also reveals the hidden correlations. Algorithmically, we designed DTA very carefully so that it is (a) scalable, (b) space efficient (it does not need to store the past) and (c) fully automatic with no need for user-defined parameters. Moreover, we propose STA, a streaming tensor analysis method, which provides a fast, streaming approximation to DTA. We implemented all our methods, and applied them in two real settings, namely, anomaly detection and multi-way latent semantic indexing. We used two real, large datasets, one on network flow data (100GB over 1 month) and one from DBLP (200MB over 25 years). Our experiments show that our methods are fast, accurate and that they find interesting patterns and outliers on the real datasets.
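The "does not need to store the past" property suggests maintaining, per mode, a decayed covariance matrix that is updated with each incoming tensor and re-eigendecomposed. A simplified numpy sketch of one such incremental step (the function `dta_update` and the `forget` factor are assumed forms, not the paper's exact update rule):

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: the chosen axis becomes the rows
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def dta_update(C_list, X, ranks, forget=0.98):
    """One incremental tracking step (sketch): decay each mode's
    covariance, add the new tensor's contribution, refresh projections."""
    U_list = []
    for d in range(len(C_list)):
        Xd = unfold(X, d)
        C_list[d] = forget * C_list[d] + Xd @ Xd.T   # decayed update
        vals, vecs = np.linalg.eigh(C_list[d])
        U_list.append(vecs[:, ::-1][:, :ranks[d]])   # top eigenvectors
    return C_list, U_list
```

Only the small per-mode covariances persist between steps, which is what makes the summary space-efficient.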
Multilinear principal component analysis of tensor objects for recognition
In Proc. Int. Conf. Pattern Recognit., 2006
Abstract

Cited by 84 (15 self)
Abstract—This paper introduces a multilinear principal component analysis (MPCA) framework for tensor object feature extraction. Objects of interest in many computer vision and pattern recognition applications, such as 2D/3D images and video sequences, are naturally described as tensors or multilinear arrays. The proposed framework performs feature extraction by determining a multilinear projection that captures most of the original tensorial input variation. The solution is iterative in nature and proceeds by decomposing the original problem into a series of multiple projection subproblems. As part of this work, methods for subspace dimensionality determination are proposed and analyzed. It is shown that the MPCA framework discussed in this work supplants existing heterogeneous solutions such as the classical principal component analysis (PCA) and its 2D variant (2D PCA). Finally, a tensor object recognition system is proposed with the introduction of a discriminative tensor feature selection mechanism and a novel classification strategy, and applied to the problem of gait recognition. Results presented here indicate MPCA’s utility as a feature extraction tool. It is shown that even without a fully optimized design, an MPCA-based gait recognition module achieves highly competitive performance and compares favorably to the state-of-the-art gait recognizers. Index Terms—Dimensionality reduction, feature extraction, gait recognition, multilinear principal component analysis (MPCA), tensor objects.
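The "series of multiple projection subproblems" can be sketched as alternating optimization: fix all mode projections but one, accumulate that mode's scatter over the partially projected samples, and take top eigenvectors. A minimal numpy illustration (function names `multi_project`/`mpca` are assumptions; samples are assumed zero-mean, and the paper's variance-based rank selection is omitted):

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_project(A, U, skip):
    """Apply U[m].T along every mode m except `skip`."""
    B = A
    for m, Um in enumerate(U):
        if m != skip:
            B = np.moveaxis(np.tensordot(Um.T, np.moveaxis(B, m, 0), axes=1), 0, m)
    return B

def mpca(samples, ranks, n_iter=5):
    shape = samples[0].shape
    U = [np.eye(s) for s in shape]            # start from full identity
    for _ in range(n_iter):
        for d in range(len(shape)):
            S = np.zeros((shape[d], shape[d]))
            for A in samples:                 # mode-d scatter of projections
                Bd = unfold(multi_project(A, U, skip=d), d)
                S += Bd @ Bd.T
            vals, vecs = np.linalg.eigh(S)
            U[d] = vecs[:, ::-1][:, :ranks[d]]  # keep top eigenvectors
    return U
```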
Discriminant locally linear embedding with high-order tensor data
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2008
Abstract

Cited by 44 (12 self)
Graph embedding along with its linearization and kernelization provides a general framework that unifies most traditional dimensionality reduction algorithms. From this framework, we propose a new manifold learning technique called discriminant locally linear embedding (DLLE), in which the local geometric properties within each class are preserved according to the locally linear embedding (LLE) criterion, and the separability between different classes is enforced by maximizing margins between point pairs from different classes. To deal with the out-of-sample problem in visual recognition with vector input, the linear version of DLLE, i.e., linearization of DLLE (DLLE/L), is directly proposed through the graph-embedding framework. Moreover, we propose its multilinear version, i.e., tensorization of DLLE, for the out-of-sample problem with high-order tensor input. Based on DLLE, a procedure for gait recognition is described. We conduct comprehensive experiments on both gait and face recognition, and observe that: 1) DLLE along with its linearization and tensorization outperforms the related versions of linear discriminant analysis, and DLLE/L demonstrates greater effectiveness than the linearization of LLE; 2) algorithms based on tensor representations are generally superior to linear algorithms when dealing with intrinsically high-order data; and 3) for human gait recognition, DLLE/L generally obtains higher accuracy than state-of-the-art gait recognition algorithms on the standard University of South Florida gait database.
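The within-class "LLE criterion" mentioned here rests on reconstruction weights that express each point as an affine combination of its same-class neighbors. A minimal sketch of that weight computation for a single point (the function name `lle_weights` and the trace-scaled regularizer are illustrative assumptions):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights for one point from its neighbors (sketch).
    x: (d,) target point; neighbors: (k, d) same-class neighbor points."""
    Z = neighbors - x                        # shift neighborhood to origin
    G = Z @ Z.T                              # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(G))  # regularize if singular
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                       # weights sum to one
```

DLLE keeps these local weights fixed within each class while a separate margin term pushes apart pairs from different classes.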
GPCA: An Efficient Dimension Reduction Scheme for Image Compression and Retrieval
2004
Abstract

Cited by 36 (2 self)
Recent years have witnessed a dramatic increase in the quantity of image data collected, due to advances in fields such as medical imaging, reconnaissance, surveillance, astronomy, and multimedia. With this increase has come the need to be able to store, transmit, and query large volumes of image data efficiently. A common operation on image databases is the retrieval of all images that are similar to a query image. For this, the images in the database are often represented as vectors in a high-dimensional space and a query is answered by retrieving all image vectors that are proximal to the query image in this space, under a suitable similarity metric. To overcome problems associated with high dimensionality, such as high storage and retrieval times, a dimension reduction step is usually applied to the vectors to concentrate relevant information in a small number of dimensions. Principal Component Analysis (PCA) is a well-known dimension reduction scheme. However, since it works with vectorized representations of images, PCA does not take into account the spatial locality of pixels in images. In this paper, a new dimension reduction scheme, called Generalized Principal Component Analysis (GPCA), is presented. This scheme works directly with images in their native state, as two-dimensional matrices, by projecting the images to a vector space that is the tensor product of two lower-dimensional vector spaces. Experiments on databases of face images show that, for the same amount of storage, GPCA is superior to PCA in terms of quality of the compressed images, query precision, and computational cost.
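A tensor-product subspace of the kind described here is spanned by a left factor L and a right factor R, so each m-by-n image A compresses to the small matrix L.T @ A @ R. One common way to fit the factors is to alternate between the two sides; the sketch below (function name `gpca_like` is an assumption, and this alternating form is a generic two-sided projection, not necessarily the paper's exact algorithm):

```python
import numpy as np

def gpca_like(images, r1, r2, n_iter=5):
    """Fit a two-sided projection L (m x r1), R (n x r2) by alternating
    eigen-decompositions over a list of (m, n) image matrices (sketch)."""
    m, n = images[0].shape
    L, R = np.eye(m, r1), np.eye(n, r2)
    for _ in range(n_iter):
        ML = sum(A @ R @ R.T @ A.T for A in images)   # left-side scatter
        L = np.linalg.eigh(ML)[1][:, ::-1][:, :r1]
        MR = sum(A.T @ L @ L.T @ A for A in images)   # right-side scatter
        R = np.linalg.eigh(MR)[1][:, ::-1][:, :r2]
    return L, R

def compress(A, L, R):
    return L.T @ A @ R          # r1 x r2 code for an m x n image

def reconstruct(D, L, R):
    return L @ D @ R.T          # approximate image from its code
```

Storage per image drops from m*n to r1*r2 plus the shared factors.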
Out-of-core tensor approximation of multi-dimensional matrices of visual data
ACM Transactions on Graphics, 2005
Abstract

Cited by 35 (5 self)
Tensor approximation is necessary to obtain compact multilinear models for multidimensional visual datasets. Traditionally, each multidimensional data item is represented as a vector. Such a scheme flattens the data and partially destroys the internal structures established throughout the multiple dimensions. In this paper, we retain the original dimensionality of the data items to more effectively exploit existing spatial redundancy and allow more efficient computation. Since the size of visual datasets can easily exceed the memory capacity of a single machine, we also present an out-of-core algorithm for higher-order tensor approximation. The basic idea is to partition a tensor into smaller blocks and perform tensor-related operations blockwise. We have successfully applied our techniques to three graphics-related data-driven models, including 6D bidirectional texture functions (BTFs), 7D dynamic BTFs, and 4D volume simulation sequences. Experimental results indicate that our techniques can not only process out-of-core data, but also achieve higher compression ratios and quality than previous methods.
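The blockwise idea is that quantities needed for the approximation, such as a mode's Gram matrix, decompose into per-block contributions, so only one block must be resident at a time. A toy in-memory illustration of the accumulation pattern (the function name `mode0_gram_blockwise` is an assumption; a real out-of-core version would stream blocks from disk):

```python
import numpy as np

def mode0_gram_blockwise(blocks):
    """Accumulate the mode-0 Gram matrix X(0) @ X(0).T one block at a
    time; every block must share the full mode-0 extent (sketch)."""
    G = None
    for B in blocks:
        Bm = B.reshape(B.shape[0], -1)       # mode-0 unfolding of the block
        G = Bm @ Bm.T if G is None else G + Bm @ Bm.T
    return G
```

Because the Gram matrix is a sum over columns of the unfolding, partitioning the tensor along any non-zero mode leaves the result unchanged.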
Rank-R Approximation of Tensors Using Image-as-Matrix Representation
In Proc. CVPR, 2005
Abstract

Cited by 22 (1 self)
We present a novel multilinear-algebra-based approach for reduced dimensionality representation of image ensembles. We treat an image as a matrix, instead of a vector as in traditional dimensionality reduction techniques like PCA, and higher-dimensional data as a tensor. This helps exploit spatiotemporal redundancies with less information loss than image-as-vector methods. The challenges lie in the computational and memory requirements for large ensembles. Currently, there exists a rank-R approximation algorithm which, although applicable to any number of dimensions, is efficient for only low-rank approximations. For larger dimensionality reductions, the memory and time costs of this algorithm become prohibitive. We propose a novel algorithm for rank-R approximations of third-order tensors, which is efficient for arbitrary R but restricted to the important special case of 2D image ensembles, e.g., video. Both of these algorithms reduce redundancies present in all dimensions. Rank-R tensor approximation yields the most compact data representation among all known image-as-matrix methods. We evaluated the performance of our algorithm vs. other approaches on a number of datasets with the following two main results. First, for a fixed compression ratio, the proposed algorithm yields the best representation of image ensembles visually as well as in the least squares sense. Second, the proposed representation gives the best performance for object classification.
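A rank-R approximation of a third-order tensor writes it as a sum of R rank-one outer products. The classic way to fit one is alternating least squares over the three factor matrices; the sketch below is that standard CP-ALS procedure, not the paper's improved algorithm (names `khatri_rao`, `cp_als`, `cp_reconstruct` are assumptions):

```python
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product: row (i*J + j) = A[i] * B[j]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, R, n_iter=30, seed=0):
    """Rank-R approximation of a third-order tensor by alternating
    least squares (standard sketch)."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((d, R)) for d in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            others = [F[i] for i in range(3) if i != m]
            KR = khatri_rao(others[0], others[1])
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            Tm = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)  # mode-m unfolding
            F[m] = Tm @ KR @ np.linalg.pinv(G)   # least-squares factor update
    return F

def cp_reconstruct(F):
    return np.einsum('ir,jr,kr->ijk', F[0], F[1], F[2])
```

Each factor update is a least-squares solve, so the fitting error is non-increasing across iterations.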
Human gait recognition with matrix representation
2006, pp. 896–903
Abstract

Cited by 20 (4 self)
Abstract—Human gait is an important biometric feature. It can be perceived from a great distance and has recently attracted greater attention in video-surveillance-related applications, such as closed-circuit television. We explore gait recognition based on a matrix representation in this paper. First, binary silhouettes over one gait cycle are averaged. As a result, each gait video sequence, containing a number of gait cycles, is represented by a series of gray-level averaged images. Then, a matrix-based unsupervised algorithm, namely coupled subspace analysis (CSA), is employed as a preprocessing step to remove noise and retain the most representative information. Finally, a supervised algorithm, namely discriminant analysis with tensor representation (DATER), is applied to further improve classification ability. This matrix-based scheme demonstrates a much better gait recognition performance than state-of-the-art algorithms on the standard USF HumanID Gait database. Index Terms—Coupled subspace analysis (CSA), dimensionality reduction, discriminant analysis with tensor representation (DATER), human gait recognition, object representation.
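The first step described, averaging binary silhouettes over a gait cycle into a gray-level image, is a one-line operation; a minimal sketch (the function name `average_silhouettes` is an assumption):

```python
import numpy as np

def average_silhouettes(frames):
    """Average a gait cycle's binary silhouette frames into one
    gray-level image. frames: iterable of (h, w) arrays in {0, 1}."""
    S = np.asarray(list(frames), dtype=float)   # (n_frames, h, w)
    return S.mean(axis=0)                        # gray-level average
```

Each pixel of the result is the fraction of frames in which that pixel was part of the silhouette.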
Two-dimensional singular value decomposition (2DSVD) for 2D maps and images
SIAM Int’l Conf. Data Mining, 2005
Abstract

Cited by 19 (3 self)
For a set of 1D vectors, standard singular value decomposition (SVD) is frequently applied. For a set of 2D objects such as images or weather maps, we form 2DSVD, which computes principal eigenvectors of row-row and column-column covariance matrices, exactly as in the standard SVD. We study optimality properties of 2DSVD as low-rank approximation and show that it provides a framework unifying two recent approaches. Experiments on images and weather maps illustrate the usefulness of 2DSVD.
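The construction in this abstract maps directly to a few lines of numpy: form the row-row and column-column covariance matrices over the set of maps and keep their top eigenvectors. A minimal sketch (the function name `two_d_svd` and centering by the set mean are assumptions about the exact preprocessing):

```python
import numpy as np

def two_d_svd(maps, r1, r2):
    """2DSVD-style factors from a set of (m, n) maps (sketch)."""
    Mbar = np.mean(maps, axis=0)
    X = [A - Mbar for A in maps]             # center the set
    F = sum(A @ A.T for A in X)              # row-row covariance
    G = sum(A.T @ A for A in X)              # column-column covariance
    U = np.linalg.eigh(F)[1][:, ::-1][:, :r1]  # top eigenvectors of F
    V = np.linalg.eigh(G)[1][:, ::-1][:, :r2]  # top eigenvectors of G
    # each map is then encoded as M_i = U.T @ A_i @ V
    return U, V
```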
Semi-supervised bilinear subspace learning
IEEE Trans. Image Process., 2009
Abstract

Cited by 14 (3 self)
Abstract—Recent research has demonstrated the success of tensor-based subspace learning in both unsupervised and supervised configurations (e.g., 2D PCA, 2D LDA, and DATER). In this correspondence, we present a new semi-supervised subspace learning algorithm by integrating the tensor representation and the complementary information conveyed by unlabeled data. Conventional semi-supervised algorithms mostly impose a regularization term based on the data representation in the original feature space. Instead, we utilize graph Laplacian regularization based on the low-dimensional feature space. An iterative algorithm, referred to as adaptive regularization based semi-supervised discriminant analysis with tensor representation (ARSDA/T), is also developed to compute the solution. In addition to handling tensor data, a vector-based variant (ARSDA/V) is also presented, in which the tensor data are converted into vectors before subspace learning. Comprehensive experiments on the CMU PIE and YALE-B databases demonstrate that ARSDA/T brings significant improvement in face recognition accuracy over both conventional supervised and semi-supervised subspace learning algorithms. Index Terms—Adaptive regularization, dimensionality reduction, face recognition, semi-supervised learning.
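The graph Laplacian regularizer mentioned here penalizes low-dimensional features that differ across strongly connected (similar) samples. A minimal sketch of that penalty term, using the standard identity sum_ij W_ij ||y_i - y_j||^2 = 2 tr(Y.T L Y) for symmetric W (the function name `laplacian_penalty` is an assumption; the paper's adaptive weighting is omitted):

```python
import numpy as np

def laplacian_penalty(Y, W):
    """Graph-Laplacian smoothness penalty on features Y (n x d),
    given symmetric affinity weights W (n x n)."""
    L = np.diag(W.sum(axis=1)) - W       # combinatorial Laplacian
    return 2.0 * np.trace(Y.T @ L @ Y)   # = sum_ij W_ij ||y_i - y_j||^2
```

In a semi-supervised objective, W can be built over labeled and unlabeled samples alike, which is how the unlabeled data contribute.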