Results 1 - 10 of 31
Acquiring linear subspaces for face recognition under variable lighting
- IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2005
"... Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in ..."
Abstract
-
Cited by 317 (2 self)
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: A large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources, and again PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space, and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
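At recognition time, the construction described above reduces to measuring how well a probe image is explained by each subject's low-dimensional subspace. The following is a minimal sketch of that idea, not the paper's 5-to-9-source configuration: the basis is simply taken from whatever images are available (via a thin SVD), and the function and parameter names are illustrative.

```python
import numpy as np

def build_subspace(images, dim=9):
    """Stack k images of one subject (each h x w, vectorized) and return an
    orthonormal basis of the linear space they span (thin SVD)."""
    A = np.stack([im.ravel().astype(float) for im in images], axis=1)  # (pixels, k)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :dim]  # orthonormal basis, (pixels, dim)

def residual(basis, probe):
    """Distance from a probe image to a subject's subspace (reconstruction error)."""
    x = probe.ravel().astype(float)
    proj = basis @ (basis.T @ x)
    return np.linalg.norm(x - proj)

def recognize(probe, gallery_bases):
    """Assign the probe to the subject whose subspace reconstructs it best.
    `gallery_bases` maps subject name -> basis matrix."""
    return min(gallery_bases, key=lambda name: residual(gallery_bases[name], probe))
```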
Video-based face recognition using probabilistic appearance manifolds
- In Proc. IEEE Conference on Computer Vision and Pattern Recognition
, 2003
"... This paper presents a novel method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space. The complex nonlinear appearance manifold expressed as a collection of subsets (named pose manifolds), ..."
Abstract
-
Cited by 176 (5 self)
This paper presents a novel method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space. The complex nonlinear appearance manifold is expressed as a collection of subsets (named pose manifolds) and the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos and clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each pose manifold and is learned from training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihood that the input image comes from a particular pose manifold and the transition probability to this pose manifold from the previous frame. To recognize faces with partial occlusion, we introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes.
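As a rough illustration of the pipeline the abstract outlines (exemplar clustering, per-cluster PCA planes, a learned transition matrix, and a Bayesian combination over frames), here is a hedged sketch using scikit-learn; the Gaussian likelihood model, parameter values, and function names are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_pose_manifolds(frames, n_poses=5, n_dims=10):
    """Cluster exemplar frames into pose groups and fit an affine (PCA) plane to each.
    `frames` is an (n, pixels) array of vectorized face images from one person's video;
    each cluster is assumed to contain more than n_dims exemplars."""
    labels = KMeans(n_clusters=n_poses, n_init=10, random_state=0).fit_predict(frames)
    planes = [PCA(n_components=n_dims).fit(frames[labels == k]) for k in range(n_poses)]
    # Count pose-to-pose transitions along the video to get a transition matrix.
    T = np.full((n_poses, n_poses), 1e-3)          # small prior to avoid zeros
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1.0
    T /= T.sum(axis=1, keepdims=True)
    return planes, T

def frame_likelihoods(planes, frame, sigma=10.0):
    """Likelihood of one frame under each pose plane, from its reconstruction error."""
    errs = [np.linalg.norm(frame - p.inverse_transform(p.transform(frame[None]))[0])
            for p in planes]
    return np.exp(-np.square(errs) / (2 * sigma ** 2))

def video_score(planes, T, video):
    """Accumulate a simple forward (Bayesian-filter style) score over the sequence;
    compare this score across registered persons and pick the highest."""
    belief = np.full(len(planes), 1.0 / len(planes))
    log_score = 0.0
    for frame in video:
        belief = (T.T @ belief) * frame_likelihoods(planes, frame)
        log_score += np.log(belief.sum() + 1e-300)
        belief /= belief.sum()
    return log_score
```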
3d face reconstruction from a single image using a single reference face shape
- IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)
"... Abstract—Human faces are remarkably similar in global properties, including size, aspect ratio, and location of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of faces that exploits th ..."
Abstract
-
Cited by 24 (1 self)
Abstract—Human faces are remarkably similar in global properties, including size, aspect ratio, and location of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of faces that exploits the similarity of faces. Our method obtains as input a single image and uses a mere single 3D reference model of a different person’s face. Classical reconstruction methods from single images, i.e., shape-from-shading, require knowledge of the reflectance properties and lighting as well as depth values for boundary conditions. Recent methods circumvent these requirements by representing input faces as combinations (of hundreds) of stored 3D models. We propose instead to use the input image as a guide to “mold” a single reference model to reach a reconstruction of the sought 3D shape. Our method assumes Lambertian reflectance and uses harmonic representations of lighting. It has been tested on images taken under controlled viewing conditions as well as on uncontrolled images downloaded from the Internet, demonstrating its accuracy and robustness under a variety of imaging conditions and overcoming significant differences in shape between the input and reference individuals including differences in facial expressions, gender, and race. Index Terms—Computer vision, photometry, shape from shading, 3D reconstruction, lighting, single images, face, depth reconstruction.
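For readers unfamiliar with harmonic lighting representations, the sketch below shows only the standard first-order Lambertian image model that such methods build on (normals from a depth map, image as albedo times a low-order lighting combination); it is not the paper's reconstruction procedure, and the function names are illustrative.

```python
import numpy as np

def normals_from_depth(z):
    """Unit surface normals from a depth map z(x, y), via finite differences."""
    zy, zx = np.gradient(z.astype(float))
    n = np.dstack([-zx, -zy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def harmonic_render(z, albedo, l):
    """Render a Lambertian face under first-order spherical-harmonic lighting:
    I = albedo * (l0 + l1*nx + l2*ny + l3*nz), where `l` has 4 coefficients."""
    n = normals_from_depth(z)
    Y = np.dstack([np.ones_like(z), n[..., 0], n[..., 1], n[..., 2]])
    return albedo * (Y @ np.asarray(l, dtype=float))
```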
Local Facial Asymmetry for Expression Classification
- Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition
, 2004
"... We explore a novel application of facial asymme-try: expression classication. Using 2D facial expres-sion images, we show the eectiveness of automatically selected local facial asymmetry for expression recogni-tion. Quantitative evaluations of expression classica-tion using local asymmetry demonstra ..."
Abstract
-
Cited by 19 (4 self)
We explore a novel application of facial asymmetry: expression classification. Using 2D facial expression images, we show the effectiveness of automatically selected local facial asymmetry for expression recognition. Quantitative evaluations of expression classification using local asymmetry demonstrate statistically significant improvements over expression classification results on the same data set without explicit representation of facial asymmetry. A comparison of discriminative local facial asymmetry features for expression classification versus human identification is given.
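A common way to expose facial asymmetry on roughly midline-aligned images is to compare a face with its horizontal mirror; the sketch below follows that idea with illustrative region pooling, and is only a stand-in for the paper's specific asymmetry features.

```python
import numpy as np

def asymmetry_map(face):
    """Per-pixel asymmetry for a face image aligned so that the facial midline
    coincides with the vertical center of the array."""
    mirrored = face[:, ::-1].astype(float)
    return np.abs(face.astype(float) - mirrored)

def region_asymmetry(face, n_rows=8, n_cols=4):
    """Average the asymmetry map over a grid of local regions, giving a short
    feature vector from which discriminative regions can be selected."""
    d = asymmetry_map(face)
    h, w = d.shape
    blocks = d[: h - h % n_rows, : w - w % n_cols]
    blocks = blocks.reshape(n_rows, h // n_rows, n_cols, w // n_cols)
    return blocks.mean(axis=(1, 3)).ravel()
```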
Molding face shapes by example
- In Proc. European Conference on Computer Vision
, 2006
"... Abstract. Human faces are remarkably similar in global properties, including size, aspect ratios, and locations of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of a face from a singl ..."
Abstract
-
Cited by 9 (3 self)
Abstract. Human faces are remarkably similar in global properties, including size, aspect ratios, and locations of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of a face from a single image using a single 3D reference model of a different person’s face. The method uses the input image as a guide to mold the reference model to reach a desired reconstruction. Assuming Lambertian reflectance and rough alignment of the input image and reference model, we seek shape, albedo, and lighting that best fit the image while preserving the rough structure of the model. We demonstrate our method by providing accurate reconstructions of novel faces overcoming significant differences in shape due to gender, race, and facial expressions.
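The "fit the image while preserving the rough structure of the model" step is typically posed as an energy minimization. The following is an illustrative form of such an objective under the stated Lambertian/harmonic assumptions, with placeholder weights and regularizers rather than the paper's exact terms:

```latex
% Illustrative objective only: placeholder weights and regularizers,
% not the paper's exact formulation.
E(z,\rho,\ell) = \sum_{p}\Bigl(I(p) - \rho(p)\,\ell^{\top} Y\bigl(n_{z}(p)\bigr)\Bigr)^{2}
  + \lambda_{z} \sum_{p}\bigl(\Delta\bigl(z(p) - z_{\mathrm{ref}}(p)\bigr)\bigr)^{2}
  + \lambda_{\rho} \sum_{p}\bigl(\rho(p) - \rho_{\mathrm{ref}}(p)\bigr)^{2}
```

Here Y(n_z) collects the harmonic basis values at the normals induced by the depth z, ℓ is the lighting vector, and z_ref and ρ_ref come from the reference model; the Laplacian term is one possible way to penalize deviation from the reference structure.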
Face shape recovery from a single image using CCA mapping between tensor spaces
- In Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition
, 2008
"... In this paper, we propose a new approach for face shape recovery from a single image. A single near infrared (NIR) image is used as the input, and a mapping from the NIR tensor space to 3D tensor space, learned by using statistical learning, is used for the shape recovery. In the learning phase, the ..."
Abstract
-
Cited by 9 (2 self)
In this paper, we propose a new approach for face shape recovery from a single image. A single near infrared (NIR) image is used as the input, and a mapping from the NIR tensor space to the 3D tensor space, learned by using statistical learning, is used for the shape recovery. In the learning phase, two tensor models are constructed for NIR and 3D images respectively, and a canonical correlation analysis (CCA) based multivariate mapping from NIR to 3D faces is learned from a given training set of NIR-3D face pairs. In the reconstruction phase, given an NIR face image, the depth map is computed directly using the learned mapping with the help of the tensor models. Experimental results are provided to evaluate the accuracy and speed of the method. The work provides a practical solution for reliable and fast shape recovery and modeling of 3D objects.
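As a toy version of the learning and reconstruction phases, the sketch below fits a CCA mapping with scikit-learn directly on vectorized images; the tensor-space reduction used in the paper is omitted, and all shapes, names, and parameter values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def learn_nir_to_depth(nir_train, depth_train, n_components=20):
    """Learn a CCA-based multivariate mapping from NIR appearance to depth.
    nir_train: (n_samples, n_nir_features), depth_train: (n_samples, n_depth_features);
    vectorized images stand in for the paper's tensor/mode-reduced representations."""
    cca = CCA(n_components=n_components, max_iter=1000)
    cca.fit(nir_train, depth_train)
    return cca

def reconstruct_depth(cca, nir_probe):
    """Predict a (vectorized) depth map for a new NIR face image."""
    return cca.predict(nir_probe.reshape(1, -1))[0]
```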
Video-based Face Recognition: A Survey
"... Abstract—During the past several years, face recognition in video has received significant attention. Not only the wide range of commercial and law enforcement applications, but also the availability of feasible technologies after several decades of research contributes to the trend. Although curren ..."
Abstract
-
Cited by 7 (0 self)
Abstract—During the past several years, face recognition in video has received significant attention. Not only the wide range of commercial and law enforcement applications, but also the availability of feasible technologies after several decades of research, contributes to this trend. Although current face recognition systems have reached a certain level of maturity, their development is still limited by the conditions encountered in many real applications. For example, recognition from video sequences acquired in open environments, with changes in illumination and/or pose, facial occlusion, and/or low image resolution, remains a largely unsolved problem; algorithms that cope with these conditions are yet to be developed. This paper provides an up-to-date survey of video-based face recognition research. To present a comprehensive survey, we categorize existing video-based recognition approaches and present detailed descriptions of representative methods within each category. In addition, relevant topics such as real-time detection and tracking for video, and issues such as illumination, pose, 3D, and low resolution, are covered. Keywords—Face recognition, video-based, survey.
Shape from shading under various imaging conditions
- In International Conference on Computer Vision and Pattern Recognition (CVPR)
, 2007
"... Most of the shape from shading (SFS) algorithms have been developed under the simplifying assumptions of a Lambertian surface, an orthographic projection, and a dis-tant light source. Due to the difficulty of the SFS prob-lem, only a small number of algorithms have been proposed for surfaces with no ..."
Abstract
-
Cited by 6 (1 self)
Most of the shape from shading (SFS) algorithms have been developed under the simplifying assumptions of a Lambertian surface, an orthographic projection, and a distant light source. Due to the difficulty of the SFS problem, only a small number of algorithms have been proposed for surfaces with non-Lambertian reflectance, and among those, only very few algorithms are applicable for surfaces with specular and diffuse reflectance. In this paper we propose a unified framework that is capable of solving the SFS problem under various settings of imaging conditions, i.e., Lambertian or non-Lambertian, orthographic or perspective projection, and distant or nearby light source. The proposed algorithm represents the image irradiance equation of each setting as an explicit Partial Differential Equation (PDE). In our implementation we use the Lax-Friedrichs sweeping method to solve this PDE. To demonstrate the efficiency of the proposed algorithm, several comparisons with the state of the art of the SFS literature are given.
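For reference, the classical setting that this unified framework generalizes is the Lambertian, orthographic, distant-light image irradiance equation:

```latex
% Lambertian surface z(x, y), orthographic projection, distant light
% l = (l_1, l_2, l_3), unit albedo.
I(x,y) = \frac{-l_{1} z_{x} - l_{2} z_{y} + l_{3}}{\sqrt{z_{x}^{2} + z_{y}^{2} + 1}},
\qquad z_{x} = \frac{\partial z}{\partial x},\quad z_{y} = \frac{\partial z}{\partial y}
```

For frontal lighting l = (0, 0, 1) this reduces to the familiar eikonal-type equation |∇z|² = 1/I² − 1; the paper's contribution is writing and solving analogous explicit PDEs for the non-Lambertian, perspective-projection, and nearby-light cases.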
Pose-encoded spherical harmonics for robust face recognition using a single image
- In Wenyi Zhao, Shaogang Gong, and Xiaoou Tang, editors, Proc. Workshop on Analysis and Modelling of Faces and Gestures, volume LNCS-3723. Springer-Verlag
, 2005
"... Abstract. Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. Under Lambertian model, spherical harmonics representation has proved to be effective in modelling illumination variations for a given pose. In this paper, we extend the ..."
Abstract
-
Cited by 4 (1 self)
Abstract. Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. Under the Lambertian model, the spherical harmonics representation has proved to be effective in modelling illumination variations for a given pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we show that 2D harmonic basis images at different poses are related by closed-form linear combinations. This enables an analytic method for generating new basis images at a different pose, which are typically required to handle illumination variations at that particular pose. Furthermore, the orthonormality of the linear combinations is utilized to propose an efficient method for robust face recognition in which only one set of front-view basis images per subject is stored. In this method, we directly project a rotated testing image onto the space of front-view basis images after establishing the image correspondence. Very good recognition results have been demonstrated using this method.
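For context, the front-view basis images referred to above are the standard nine low-order (orders 0 to 2) spherical-harmonic images of a Lambertian face; a minimal sketch of their construction, with constant factors dropped and illustrative names, is:

```python
import numpy as np

def harmonic_basis_images(normals, albedo):
    """Nine spherical-harmonic basis images (orders 0-2) for a Lambertian face,
    given per-pixel unit normals (h, w, 3) and albedo (h, w). Normalization
    constants are omitted; they only rescale each basis image."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    b = [np.ones_like(nx), nx, ny, nz,
         nx * ny, nx * nz, ny * nz,
         nx ** 2 - ny ** 2, 3.0 * nz ** 2 - 1.0]
    return np.dstack([albedo * bi for bi in b])   # (h, w, 9)
```

Recognition then amounts to projecting a correspondence-aligned test image onto each subject's stored basis and comparing the residuals, as the abstract describes.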
Normalization of Face Illumination Based on Large- and Small-Scale Features
, 2009
"... Abstract—A face image can be represented by a combination of large-and small-scale features. It is well-known that the variations of illumination mainly affect the large-scalefeatures (low-frequency components), and not so much the small-scale features. Therefore, in relevant existing methods only t ..."
Abstract
-
Cited by 3 (1 self)
Abstract—A face image can be represented by a combination of large- and small-scale features. It is well known that variations of illumination mainly affect the large-scale features (low-frequency components), and not so much the small-scale features. Therefore, relevant existing methods extract only the small-scale features as illumination-invariant features for face recognition, while the large-scale intrinsic features are always ignored. In this paper, we argue that both large- and small-scale features of a face image are important for face restoration and recognition. Moreover, we suggest that illumination normalization should be performed mainly on the large-scale features of a face image rather than on the original face image. A novel method of normalizing both the Small- and Large-scale (S&L) features of a face image is proposed. In this method, a single face image is first decomposed into large- and small-scale features. After that, illumination normalization is mainly performed on the large-scale features, and only a minor correction is made on the small-scale features. Finally, a normalized face image is generated by combining the processed large- and small-scale features. In addition, an optional visual compensation step is suggested for improving the visual quality of the normalized image. Experiments on the CMU-PIE, Extended Yale B, and FRGC 2.0 face databases show that the proposed method obtains significantly better recognition performance and visual results than related state-of-the-art methods. Index Terms—Face recognition, illumination normalization, visual compensation.
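Below is a minimal sketch of the decompose-normalize-recombine flow described above, with a Gaussian low-pass in the log domain standing in for the paper's actual large-/small-scale decomposition and with crude placeholder normalization steps; parameter values and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(face, sigma=8.0):
    """Split a face image into large-scale (smooth, illumination-dominated) and
    small-scale (detail) layers in the log domain. A Gaussian low-pass is used
    here only to keep the sketch simple."""
    log_img = np.log1p(face.astype(float))
    large = gaussian_filter(log_img, sigma)
    small = log_img - large
    return large, small

def normalize_illumination(face, sigma=8.0):
    """Flatten the large-scale layer (crude illumination normalization), apply
    only a minor correction to the small-scale layer, and recombine."""
    large, small = decompose(face, sigma)
    large_norm = np.full_like(large, large.mean())   # remove low-frequency lighting
    small_norm = small * 1.0                          # minor-correction placeholder
    return np.expm1(large_norm + small_norm)
```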