
## Face Recognition Using Laplacianfaces (2005)


Venue: IEEE Transactions on Pattern Analysis and Machine Intelligence

Citations: 379 (39 self)

### Citations

3722 | Normalized cuts and image segmentation - Shi, Malik - 1997 |

2420 | A Global Geometric Framework for Nonlinear Dimensionality - Tenenbaum, Silva, et al. - 2000 |
Citation Context: ...so that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in the image space. ...

2251 | Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection - Belhumeur, Hespanha, et al. - 1997 |
Citation Context: ...actice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector m...

1491 | Spectral Graph Theory - Chung - 1997 |
Citation Context: ...$w^T X(D - S)X^T w = w^T X L X^T w$, where $X = [x_1, x_2, \ldots, x_n]$ and $D$ is a diagonal matrix; its entries are column (or row, since $S$ is symmetric) sums of $S$: $D_{ii} = \sum_j S_{ji}$. $L = D - S$ is the Laplacian matrix [6]. Matrix $D$ provides a natural measure on the data points. The bigger the value $D_{ii}$ (corresponding to $y_i$) is, the more "important" is $y_i$. Therefore, we impose a constraint as follows: $y^T D y = 1 \Rightarrow w^T \ldots$
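The Laplacian construction quoted in this context can be sketched in a few lines. This is a minimal illustration with a toy similarity matrix, not the paper's code; the matrix values are arbitrary:

```python
import numpy as np

# Build the graph Laplacian L = D - S from a symmetric similarity matrix S,
# where D is diagonal with D_ii = sum_j S_ji (column sums, equal to row
# sums by symmetry), as in the citation context above.
def graph_laplacian(S):
    S = np.asarray(S, dtype=float)
    assert np.allclose(S, S.T), "S is assumed symmetric"
    D = np.diag(S.sum(axis=0))
    return D - S, D

# Toy similarity matrix for three data points (values are made up)
S = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
L, D = graph_laplacian(S)

# Sanity check: the Laplacian of a symmetric graph annihilates the
# constant vector, since each row of L sums to zero.
print(L @ np.ones(3))  # -> [0. 0. 0.]
```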

1345 | Face recognition using eigenfaces - Turk, Pentland - 1991 |
Citation Context: ... INTRODUCTION Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied techniques for face recognition is the appearance-based method [28][16]. When using appearance-based methods, we usually represent an image of size n×m pixels by a vector in an n×m dimensional space. In practice, however, these n×m dimensional spaces are too large to...
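The appearance-based representation and the PCA (eigenfaces) technique described in the contexts above can be sketched as follows. This is a toy illustration: random arrays stand in for face images and all variable names are illustrative, not the paper's code:

```python
import numpy as np

# Eigenfaces-style PCA sketch: flatten each n x m image into a vector,
# subtract the mean face, and keep the top-k principal directions.
rng = np.random.default_rng(0)
n, m, num_images, k = 8, 8, 20, 5
images = rng.normal(size=(num_images, n, m))  # stand-ins for face images

X = images.reshape(num_images, -1)        # each row: one image as a vector
mean_face = X.mean(axis=0)
Xc = X - mean_face                        # center the data

# SVD of the centered data matrix gives the principal directions directly
U, sing, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:k]                       # top-k "eigenfaces", shape (k, n*m)
coords = Xc @ eigenfaces.T                # k-dimensional representation
assert coords.shape == (num_images, k)
```

Working in the k-dimensional coordinate space instead of the raw n×m pixel space is exactly the dimensionality reduction the context describes.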

1091 | Visual Learning and Recognition of 3-D Objects from Appearance - Murase, Nayar - 1995 |
Citation Context: ...RODUCTION Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied techniques for face recognition is the appearance-based method [28][16]. When using appearance-based methods, we usually represent an image of size n×m pixels by a vector in an n×m dimensional space. In practice, however, these n×m dimensional spaces are too large to all...

696 | Probabilistic Visual Learning for Object Representation - Moghaddam, Pentland - 1997 |
Citation Context: ...ing is done. Figure 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-neighbor [2], Bayesian [15], Support Vector Machine [17], etc. In this paper, we apply the nearest-neighbor classifier for its simplicity. In short, the recognition process has three steps. First, we calculate the Laplacianface...
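The three-step nearest-neighbor recognition process described in this context can be sketched with toy data. A random matrix `W` stands in for the learned Laplacianfaces basis (an assumption for illustration only):

```python
import numpy as np

# Sketch of the recognition pipeline: (1) project the training images
# into the subspace, (2) project the query image, (3) assign the label
# of the nearest neighbor in the subspace.
rng = np.random.default_rng(1)
dim, k = 100, 4
W = rng.normal(size=(dim, k))              # placeholder projection basis

train = rng.normal(size=(10, dim))         # 10 training face vectors
labels = np.arange(10)                     # one identity per image
query = train[3] + 0.01 * rng.normal(size=dim)  # noisy copy of image 3

train_low = train @ W                      # step 1: project the gallery
query_low = query @ W                      # step 2: project the query
dists = np.linalg.norm(train_low - query_low, axis=1)
predicted = labels[np.argmin(dists)]       # step 3: nearest neighbor
print(predicted)  # -> 3
```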

652 | Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering - Belkin, Niyogi - 2001 |
Citation Context: ...nlinear submanifold hidden in the image space. Some nonlinear techniques have been proposed to discover the nonlinear structure of the manifold, e.g. Isomap [27], LLE [18][20], and Laplacian Eigenmap [3]. These nonlinear methods do yield impressive results on some benchmark artificial data sets. However, they yield maps that are defined only on the training data points and how to evaluate the maps on...

592 | A low-dimensional procedure for the characterization of human face - Sirovich, Kirby - 1987 |
Citation Context: ... n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model...

460 | PCA versus LDA - Martínez, Kak |
Citation Context: ... m-dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1], [2], [8], [11], [12], [14], [22], [26], [28], [34], [37]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method...

403 | Locality Preserving Projections - He, Niyogi |
Citation Context: ...be specific, the manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped to a low-dimensional face...
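The LPP method named in this context reduces to a generalized eigenproblem, $X L X^T w = \lambda X D X^T w$, solved for the eigenvectors with the smallest eigenvalues. Below is a minimal numpy sketch on toy data; for brevity it assumes a fully connected heat-kernel similarity graph rather than the k-nearest-neighbor graph the paper uses:

```python
import numpy as np

# LPP sketch: build a similarity graph, form the Laplacian L = D - S,
# and solve the generalized eigenproblem X L X^T w = lam X D X^T w for
# the eigenvectors with the smallest eigenvalues.
rng = np.random.default_rng(0)
n_samples, dim, k = 30, 5, 2
X = rng.normal(size=(dim, n_samples))        # columns are data points

# Fully connected heat-kernel weights (simplifying assumption)
sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
S = np.exp(-sq)                              # S_ij = exp(-||xi - xj||^2)
np.fill_diagonal(S, 0.0)

D = np.diag(S.sum(axis=0))
L = D - S
A = X @ L @ X.T                              # symmetric positive semidefinite
B = X @ D @ X.T                              # positive definite here (dim < n)

# Reduce A w = lam B w to a standard eigenproblem via Cholesky B = C C^T
C = np.linalg.cholesky(B)
Ci = np.linalg.inv(C)
eigvals, V = np.linalg.eigh(Ci @ A @ Ci.T)   # ascending eigenvalues
W = Ci.T @ V[:, :k]                          # smallest-k -> projection basis
Y = W.T @ X                                  # k-dimensional embedding
assert Y.shape == (k, n_samples)
```

Taking the smallest eigenvalues (rather than the largest, as PCA does) is what makes the projection preserve local neighborhood structure.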

380 | Think globally, fit locally: unsupervised learning of low dimensional manifolds - Saul, Roweis, et al. |
Citation Context: ...f the face images lie on a nonlinear submanifold hidden in the image space. Some nonlinear techniques have been proposed to discover the nonlinear structure of the manifold, e.g. Isomap [27], LLE [18][20], and Laplacian Eigenmap [3]. These nonlinear methods do yield impressive results on some benchmark artificial data sets. However, they yield maps that are defined only on the training data points and...

342 | Face recognition by elastic bunch graph matching - Wiskott, Fellous, et al. - 1997 |
Citation Context: ... somehow similar to Fisherfaces. Figure 5. The original face image and the cropped image. 7.2 Face Recognition Using Laplacianfaces. Once the Laplacianfaces are created, face recognition [2][14][28][29] becomes a pattern classification task. In this section, we investigate the performance of our proposed Laplacianfaces method for face recognition. The system performance is compared with the Eigenfac...

254 | Principal manifolds and nonlinear dimensionality reduction via tangent space alignment - Zhang, Zha - 2005 |
Citation Context: ...mensionality of the nonlinear face manifold, or degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space. Some previous works [33][34] show that the local tangent space can be approximated using points in a neighbor set. Therefore, one possibility is to estimate the dimensionality of the tangent space. Another possible extension of...

204 | Charting a manifold - Brand - 2002 |
Citation Context: ...r, they yield maps that are defined only on the training data points and how to evaluate the maps on novel test data points remains unclear. Therefore, these nonlinear manifold learning techniques [3][5][18][20][27][33] might not be suitable for some computer vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representat...

182 | Kernel eigenfaces vs. kernel Fisherfaces: Face recognition using kernel methods - Yang - 2002 |
Citation Context: ... tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low-dimensional representations through kernel based techniques for face recognition [13], [33]. These methods can discover the nonlinear structure of the face images. However, they are computationally expensive. Moreover, none of them explicitly considers the structure of the manifold on which...

173 | Video-based face recognition using probabilistic appearance manifolds - Ho, Yang, et al. - 2003 |
Citation Context: ...nuous curve in image space since there is only one degree of freedom, viz. the angle of rotation. Thus, we can say that the set of face images is intrinsically one-dimensional. Many recent works [7], [10], [18], [19], [21], [23], [27] have shown that the face images do reside on a low-dimensional submanifold embedded in a high-dimensional ambient space (image space). Therefore, an effective subspace le...

133 | Subspace linear discriminant analysis for face recognition - Zhao, Chellappa, et al. - 1999 |
Citation Context: ...onal spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model linear vari...

127 | Nonlinear Dimensionality Reduction by Locally - Roweis, Saul - 2000 |
Citation Context: ...orm LDA, and also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in th...

113 | Support vector machines applied to face recognition - Phillips - 1999 |
Citation Context: ...n example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-neighbor [2], Bayesian [15], Support Vector Machine [17], etc. In this paper, we apply the nearest-neighbor classifier for its simplicity. In short, the recognition process has three steps. First, we calculate the Laplacianfaces from the training set of fa...

96 | Using manifold structure for partially labelled classification - Belkin, Niyogi |
Citation Context: ...tially an unsupervised learning process. And in many practical cases, one finds a wealth of easily available unlabeled samples. These samples might help to discover the face manifold. For example, in [4], it is shown how unlabeled samples are used for discovering the manifold structure and hence improving the classification accuracy. Since the face images are believed to reside on a sub-manifold embe...

88 | Global coordination of local linear models - Roweis, Saul, et al. |

64 | Boosting chain learning for object detection - Xiao, Zhu, et al. - 2003 |
Citation Context: ...32×32 pixels, with 256 gray levels per pixel. Thus, each image is represented by a 1,024-dimensional vector in image space. The details of our methods for face detection and alignment can be found in [30], [32]. No further preprocessing is done. Fig. 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including near...

41 | Face recognition using kernel based Fisher discriminant analysis - Liu, Huang, et al. - 2002 |
Citation Context: ...vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representations through kernel based techniques for face recognition [13][19]. These methods can discover the nonlinear structure of the face images. However, they are computationally expensive. Moreover, none of them explicitly considers the structure of the manifold on w...

35 | Manifold of facial expression - Chang, Hu, et al. |
Citation Context: ...outperform LDA, and also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidd...

29 | An Efficient LDA Algorithm for Face Recognition - Yang, Yu, et al. - 2000 |
Citation Context: ...ensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model linear...

22 | Manifold pursuit: A new approach to appearance based recognition - Shashua, Levin, et al. - 2002 |
Citation Context: ... also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in the image spac...

21 | Decomposed eigenface for face recognition under various lighting conditions - Shakunaga, Shigenari - 2001 |
Citation Context: ...hese n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to m...

20 | Ranking prior likelihood distributions for Bayesian shape localization framework - Yan, Li, et al. - 2003 |
Citation Context: ...ixels, with 256 gray levels per pixel. Thus, each image is represented by a 1,024-dimensional vector in image space. The details of our methods for face detection and alignment can be found in [30], [32]. No further preprocessing is done. Fig. 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-ne...

17 | Linear Subspaces for Illumination Robust Face Recognition - Batur, Hayes - 2001 |
Citation Context: ... practice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvecto...

16 | Principal Component Analysis over Continuous Subspaces and ... - Levin, Shashua - 2002 |
Citation Context: ..., however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method d...

7 | Locality Preserving ... - He, Niyogi |
Citation Context: ...be specific, the manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacianfaces. The face subspace preserves local structure,...

5 | Where to Go with Face Recognition - Gross, Shi, et al. - 2001 |
Citation Context: ...ice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector meth...

5 | Face Database - Univ - 2002 |
Citation Context: ...methods in face recognition. In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database from CMU [25], the second one is the Yale database [30], and the third one is the MSRA database collected at Microsoft Research Asia. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and...

5 | Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html - Univ - 2002 |
Citation Context: ... LPP and Laplacian Eigenmap. In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database from CMU [25], the second one is the Yale database [31], and the third one is the MSRA database collected at Microsoft Research Asia. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and...

1 | Isometric Embedding and Continuum - Zha, Zhang - 2003 |
Citation Context: ...ps that are defined only on the training data points and how to evaluate the maps on novel test data points remains unclear. Therefore, these nonlinear manifold learning techniques [3][5][18][20][27][33] might not be suitable for some computer vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representations through kern...