Results 11 - 20 of 124
Face recognition using multi-viewpoint patterns for robot vision
- In: International Symposium of Robotics Research
, 2003
Cited by 68 (10 self)
This paper introduces a novel approach for face recognition using multiple face patterns obtained from various views for robot vision. A face pattern may change dramatically due to changes in the relative positions of a robot, a subject and light sources. As a robot is not generally able to ascertain such changes by itself, face recognition in robot vision must be robust against the variations they cause. Conventional methods using a single face pattern cannot deal with such variations. To overcome this problem, we have developed a face recognition method based on the constrained mutual subspace method (CMSM) using multi-viewpoint face patterns arising from the movement of a robot or a subject. The effectiveness of our method for robot vision is demonstrated by means of a preliminary experiment.
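The mutual-subspace idea in the abstract above compares two sets of face patterns by the canonical (principal) angles between their linear subspaces. A minimal sketch of that core comparison in NumPy — plain MSM without the "constrained" generalized-difference-subspace projection, with hypothetical array shapes:

```python
import numpy as np

def subspace_similarity(X1, X2, dim=3):
    """Similarity between two image sets via canonical (principal) angles.

    X1, X2: arrays of shape (n_pixels, n_images), one column per face pattern.
    Returns the mean squared cosine of the first `dim` canonical angles,
    as used by mutual-subspace-style methods (higher = more similar).
    """
    # Orthonormal bases of the two pattern subspaces
    U1, _, _ = np.linalg.svd(X1, full_matrices=False)
    U2, _, _ = np.linalg.svd(X2, full_matrices=False)
    U1, U2 = U1[:, :dim], U2[:, :dim]
    # Singular values of U1^T U2 are the cosines of the canonical angles
    cosines = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return float(np.mean(cosines ** 2))
```

Identical pattern sets give similarity 1; sets spanning orthogonal subspaces give 0.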
Illumination Normalization for Robust Face Recognition against Varying Lighting Conditions
- IEEE Workshop on AMFG’03
, 2003
Cited by 66 (7 self)
Evaluations of the state of the art of both academic face recognition algorithms and commercial systems have shown that the recognition performance of most current technologies degrades under variations of illumination. This paper investigates several illumination normalization methods and proposes some novel solutions. The main contributions of this paper include: (1) a Gamma Intensity Correction (GIC) method is proposed to normalize the overall image intensity at the given illumination level; (2) a region-based strategy combining GIC and Histogram Equalization (HE) is proposed to further eliminate the side-lighting effect; (3) a Quotient Illumination Relighting (QIR) method is presented to synthesize images under a pre-defined normal lighting condition from face images captured under non-normal lighting conditions. These methods are evaluated and compared on the Yale illumination face database B and the Harvard illumination face database. Considerable improvements are observed, and conclusions are drawn at the end.
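The gamma-correction step described in (1) can be sketched as follows. This is a simplified variant that searches for a gamma matching a fixed target mean intensity; the paper's GIC fits gamma against a canonically illuminated reference image, and the search range and target here are illustrative:

```python
import numpy as np

def gamma_intensity_correction(img, target_mean=0.5):
    """Simplified gamma-style intensity normalization (hypothetical variant).

    img: float array with values in [0, 1]. Searches a grid of gammas for
    the power-law transform img**gamma whose mean intensity is closest to
    target_mean, and returns the transformed image.
    """
    gammas = np.linspace(0.2, 5.0, 200)
    errors = [abs((img ** g).mean() - target_mean) for g in gammas]
    best = gammas[int(np.argmin(errors))]
    return img ** best
```

A uniformly dark image (mean 0.1) comes back with mean intensity close to the target, illustrating the brightness normalization GIC is after.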
Comparing Images Under Variable Illumination
, 1998
Cited by 63 (5 self)
We consider the problem of determining whether two images come from different objects or the same object in the same pose, but under different illumination conditions. We show that this problem cannot be solved using hard constraints: even using a Lambertian reflectance model, there is always an object and a pair of lighting conditions consistent with any two images. Nevertheless, we show that for point sources and objects with Lambertian reflectance, the ratio of two images from the same object is simpler than the ratio of images from different objects. We also show that the ratio of the two images provides two of the three distinct values in the Hessian matrix of the object’s surface. Using these observations, we develop a simple measure for matching images under variable illumination, comparing its performance to other existing methods on a database of 450 images of 10 individuals.
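The ratio idea above suggests a simple measure: for same-object pairs the (log-)ratio image varies smoothly, so its gradient energy is low, while for different objects it fluctuates with the differing surface geometry. An illustrative sketch, not the paper's exact measure:

```python
import numpy as np

def ratio_complexity(img_a, img_b, eps=1e-6):
    """Illustrative ratio-based comparison measure.

    For Lambertian objects under point sources, the ratio of two images of
    the SAME object in the same pose varies more smoothly than the ratio of
    images of different objects, so a lower gradient energy of the
    log-ratio suggests a same-object match. eps guards against log(0).
    """
    log_ratio = np.log(img_a + eps) - np.log(img_b + eps)
    gy, gx = np.gradient(log_ratio)
    return float(np.mean(gy ** 2 + gx ** 2))
```

A globally rescaled copy of an image (constant ratio) scores lower than an unrelated image, matching the intuition in the abstract.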
Nine Points of Light: Acquiring Subspaces for Face Recognition under Variable Lighting
, 2001
Cited by 62 (4 self)
Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. Basis images spanning this space are usually obtained in one of two ways: A large number of images of the object under different conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, a 3-D model (perhaps reconstructed from images) is used to render virtual images under either point sources from which a subspace is derived using PCA or more recently under diffuse synthetic lighting based on spherical harmonics. In this paper, we show that there exists a configuration of nine point light source directions such that by taking nine images of each individual under these single sources, the resulting subspace is effective at recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex intermediate steps such as PCA and 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or physically construct complex diffuse (harmonic) light fields. We provide both theoretical and empirical results to explain why these linear spaces should be good for recognition.
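Recognition with such a nine-image subspace reduces to a least-squares distance from a probe image to the span of each subject's training images; the gallery subject with the smallest residual wins. A minimal sketch, with hypothetical array shapes:

```python
import numpy as np

def subspace_residual(basis_images, probe):
    """Relative distance from a probe image to a subject's lighting subspace.

    basis_images: (n_pixels, n_images) matrix whose columns are the
    subject's training images (e.g. the nine single-source images).
    probe: flattened probe image of shape (n_pixels,).
    Returns ||probe - projection|| / ||probe||.
    """
    B = basis_images
    # Least-squares coefficients of the probe in the span of the columns of B
    coeffs, *_ = np.linalg.lstsq(B, probe, rcond=None)
    residual = probe - B @ coeffs
    return float(np.linalg.norm(residual) / np.linalg.norm(probe))
```

A probe lying in the subspace yields a residual near zero; a random probe in a high-dimensional pixel space yields a residual near one.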
Joint Manifold Distance: A New Approach to Appearance Based Clustering
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’03)
, 2003
Estimating 3D Shape and Texture Using Pixel Intensity, Edges, Specular Highlights, Texture Constraints and a Prior
- Proceedings of Computer Vision and Pattern Recognition
, 2005
Cited by 38 (4 self)
We present a novel algorithm that estimates the 3D shape and texture of a human face, along with the 3D pose and the light direction, from a single photograph by recovering the parameters of a 3D Morphable Model. Generally, algorithms tackling the problem of 3D shape estimation from image data use only the pixel intensity as input to drive the estimation process. This was previously achieved either with a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm, or with a more precise model that requires minimizing a non-convex cost function with many local minima. One way to reduce the local-minima problem is to use a stochastic optimization algorithm; however, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, in addition to the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The resulting overall cost function is smoother, so a stochastic optimization algorithm is not needed to avoid local minima. This leads to the Multi-Features Fitting algorithm, which has a wider radius of convergence and a higher level of precision. This is shown on example photographs and on a recognition experiment performed on the CMU-PIE image database.
Image-based Modeling and Rendering of Surfaces with Arbitrary BRDFs
- In Proc. of Computer Vision and Pattern Recognition
, 2001
Cited by 33 (6 self)
A goal of image-based rendering is to synthesize man-made and natural objects as realistically as possible. This paper presents a method for image-based modeling and rendering of objects with arbitrary (possibly anisotropic and spatially varying) BRDFs. An object is modeled by sampling the surface's incident light field to reconstruct a non-parametric apparent BRDF at each visible point on the surface. This can be used to render the object from the same viewpoint but under arbitrarily specified illumination. We demonstrate how these object models can be embedded in synthetic scenes and rendered under global illumination, which captures the interreflections between real and synthetic objects. We also show how these image-based models can be automatically composited onto video footage with dynamic illumination so that the effects (shadows and shading) of the lighting on the composited object match those of the scene.
Characterization of Human Faces under Illumination Variations Using Rank, Integrability, and Symmetry Constraints
, 2004
Cited by 32 (10 self)
Photometric stereo algorithms use a Lambertian reflectance model with a varying albedo field and involve the appearances of only one object. This paper extends photometric stereo algorithms to handle all the appearances of all the objects in a class, in particular the class of human faces. Similarity among all facial appearances motivates a rank constraint on the albedos and surface normals in the class. This leads to a factorization of an observation matrix that consists of exemplar images of different objects under different illuminations, which is beyond what can be analyzed using bilinear analysis; bilinear analysis requires exemplar images of different objects under the same illuminations. To fully recover the class-specific albedos and surface normals, integrability and face symmetry constraints are employed. The proposed linear algorithm takes into account the effects of the varying albedo field by approximating the integrability terms using only the surface normals. As an application, face recognition under illumination variation is presented.
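The rank-constrained factorization at the heart of the abstract can be sketched with a truncated SVD: the observation matrix splits into a pixel-side factor and a light-side factor. The paper's subsequent resolution of these factors into albedos/normals and light directions via integrability and symmetry constraints is omitted here:

```python
import numpy as np

def factor_observations(M, rank=3):
    """Rank-constrained factorization sketch via truncated SVD.

    M: observation matrix of shape (n_pixels, n_images), e.g. exemplar
    images of possibly different objects under different illuminations.
    Returns factors S (n_pixels x rank) and L (rank x n_images) with
    M ~= S @ L when M has (numerical) rank <= `rank`.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Split the singular values symmetrically between the two factors
    S = U[:, :rank] * np.sqrt(s[:rank])
    L = np.sqrt(s[:rank])[:, None] * Vt[:rank]
    return S, L
```

For a synthetic rank-3 observation matrix the factorization reconstructs it exactly; the factors are only determined up to an invertible 3x3 ambiguity, which is what the integrability and symmetry constraints resolve.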
Illumination Multiplexing within Fundamental Limits
Cited by 31 (5 self)
Taking a sequence of photographs using multiple illumination sources or settings is central to many computer vision and graphics problems. A growing number of recent methods use multiple sources rather than a single point source in each frame of the sequence. Potential benefits include increased signal-to-noise ratio and accommodation of scene dynamic range. However, existing multiplexing schemes, including Hadamard-based codes, are inhibited by fundamental limits set by Poisson-distributed photon noise and by sensor saturation; the prior schemes may actually be counterproductive due to these effects. We derive multiplexing codes that are optimal under these fundamental effects. Thus, the novel codes generalize the prior schemes and have a much broader applicability. Our approach is based on formulating the problem as a constrained optimization, and we further suggest an algorithm to solve this optimization problem. The superiority and effectiveness of the method are demonstrated in experiments involving object illumination.
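A classical Hadamard-style (S-matrix) multiplexing scheme, the baseline the abstract says is generalized, can be sketched for three sources: each captured frame lights two of the three sources, and the single-source images are recovered by inverting the weighting matrix. Poisson noise and saturation, the paper's actual focus, are not modeled in this sketch:

```python
import numpy as np

def hadamard_multiplex_demultiplex(single_source_images):
    """Simulate Hadamard-style multiplexed capture and demultiplexing.

    single_source_images: list of 3 arrays (h, w), the ideal images under
    each single source. Each multiplexed frame sums the sources selected
    by one row of the S-matrix Wmat; demultiplexing inverts Wmat.
    """
    # S-matrix for 3 sources: each frame lights 2 of the 3 sources
    Wmat = np.array([[1, 1, 0],
                     [1, 0, 1],
                     [0, 1, 1]], dtype=float)
    stack = np.stack(single_source_images)            # (3, h, w)
    multiplexed = np.tensordot(Wmat, stack, axes=1)   # simulated captures
    recovered = np.tensordot(np.linalg.inv(Wmat), multiplexed, axes=1)
    return recovered
```

In the noise-free case the recovery is exact; the paper's point is that with photon noise and saturation, such fixed codes are no longer optimal and must be replaced by codes found through constrained optimization.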