
## Bayesian Face Revisited: A Joint Formulation

Citations: 38 (2 self)

### Citations

2309 | Eigenfaces vs. fisherfaces: Recognition using class specific linear projection.
- Belhumeur, Hespanha, et al.
- 1997
Citation Context ...nce matrices which need to be learned from the data. We may approximate them by the between-class and within-class covariance matrices used in classic Linear Discriminant Analysis (LDA) [17]. Actually, this approximation provides a fairly good estimate. But to get a more accurate estimate in a principled way, we develop an EM-like algorithm to jointly estimate the two matrices in our mo... |
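The approximation described in this excerpt — initializing the two covariances from the between-class and within-class covariances of LDA — can be sketched numerically. A minimal numpy sketch; the helper name `scatter_estimates` and the toy data are illustrative assumptions, not from the paper:

```python
import numpy as np

def scatter_estimates(X, labels):
    """Between-class and within-class covariance estimates (normalized
    by N, so they sum to the total covariance); these can serve as
    initial approximations of S_mu and S_eps before EM refinement."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mu, mc - mu)  # class-mean spread
        S_w += (Xc - mc).T @ (Xc - mc)               # within-class spread
    n = len(X)
    return S_b / n, S_w / n

# Toy data: 3 classes of 20 samples, each class shifted by a random mean.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 4)) + np.repeat(rng.standard_normal((3, 4)), 20, axis=0)
labels = np.repeat([0, 1, 2], 20)
S_b, S_w = scatter_estimates(X, labels)
```

By the law of total scatter, `S_b + S_w` equals the total (biased) covariance of `X`, which gives a quick sanity check on the implementation.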

1299 | Multiresolution gray-scale and rotation invariant texture classification with local binary patterns
- Ojala, Pietikainen, et al.
- 2002
Citation Context ...by a face detector and rectified by an affine transform estimated from five landmarks, i.e. the eyes, nose and mouth corners. The landmarks are detected by [23]. We extract two kinds of low-level features, LBP [24] and LE [25], from the rectified holistic face. Our final data set contains 99,773 images of 2,995 people, and 2,065 of them have more than 15 images. The large width and depth of this dataset will enable rese... |

695 | Distance metric learning for large margin nearest neighbor classification
- Weinberger, Saul
Citation Context ...for each subject with m images, the relationship between the latent variables h = [μ; ε1; ···; εm] and the observations x = [x1; ···; xm] is x = Ph (7), where each block row of P contains an identity matrix in the first (μ) column, an identity matrix in the column of its own εi, and zeros elsewhere. The hidden variable h is distributed as h ∼ N(0, Σh), where Σh = diag(Sμ, Sε, ···, Sε). Therefore, based on Eqn. (7), x ∼ N(0, Σx), where Σx has diagonal blocks Sμ + Sε ... |
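The block structure of P and the induced covariance Σx = PΣhPᵀ described in this excerpt can be checked numerically. A minimal numpy sketch with toy dimensions; the random positive-definite matrices stand in for the learned Sμ and Sε:

```python
import numpy as np

d, m = 3, 2  # feature dimension, images per subject (toy sizes)
rng = np.random.default_rng(0)

# Toy positive-definite stand-ins for the learned covariances:
# S_mu (identity component) and S_eps (intra-personal variation).
Ma = rng.standard_normal((d, d)); S_mu = Ma @ Ma.T + np.eye(d)
Mb = rng.standard_normal((d, d)); S_eps = Mb @ Mb.T + np.eye(d)

# P maps h = [mu; eps_1; ...; eps_m] to x = [x_1; ...; x_m] (Eqn. 7):
# each block row has an identity in the mu column and in its own eps column.
I = np.eye(d)
P = np.zeros((m * d, (m + 1) * d))
for i in range(m):
    P[i*d:(i+1)*d, :d] = I               # mu contributes to every x_i
    P[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = I  # eps_i contributes only to x_i

# Sigma_h = diag(S_mu, S_eps, ..., S_eps)
Sigma_h = np.zeros(((m + 1) * d, (m + 1) * d))
Sigma_h[:d, :d] = S_mu
for i in range(1, m + 1):
    Sigma_h[i*d:(i+1)*d, i*d:(i+1)*d] = S_eps

# Induced covariance of x: diagonal blocks S_mu + S_eps,
# off-diagonal blocks S_mu (shared identity component).
Sigma_x = P @ Sigma_h @ P.T
```

The assertions below confirm the block pattern the excerpt starts to spell out: Σx has Sμ + Sε on the diagonal and Sμ off the diagonal.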

449 | Labeled faces in the wild: A database for studying face recognition in unconstrained environments
- Huang, Mattar, et al.
- 2008
Citation Context ...jects and having enough images of each subject. However, the current large face datasets in the wild condition suffer from either small width (PubFig [11]) or small depth (Labeled Faces in the Wild (LFW) [15]). To address this issue, we introduce a new dataset, the Wide and Deep Reference dataset (WDRef), which is both wide (around 3,000 subjects) and deep (2,000+ subjects with over 15 images, 1,000+ subjects wit... |

325 | Attribute and simile classifiers for face verification.
- Kumar, Berg, et al.
- 2009
Citation Context ...“wide” and “deep”: having a large number of different subjects and having enough images of each subject. However, the current large face datasets in the wild condition suffer from either small width (PubFig [11]) or small depth (Labeled Faces in the Wild (LFW) [15]). To address this issue, we introduce a new dataset, the Wide and Deep Reference dataset (WDRef), which is both wide (around 3,000 subjects) and deep (2,000+... |

174 | Bayesian face recognition
- Moghaddam, Jebara, et al.
- 2000
Citation Context ...face set given a gallery face set. In this paper, we focus on the verification problem, which is more widely applicable and lays the foundation for the identification problem. Bayesian face recognition [1] by Baback Moghaddam et al. is one of the representative and successful face verification methods. It formulates the verification task as a binary Bayesian decision problem. Let HI represent the intra-per... |

159 | Is that you? Metric learning approaches for face identification
- Guillaumin, Verbeek, et al.
- 2009
Citation Context ...TPLBP and FPLBP) with a linear SVM classifier. The same feature combination can also be found in [12]. As shown in Table 3, our joint Bayesian method achieves the highest accuracy: LDML-MkNN [9] 87.5%, Multishot [12] 89.50%, PLDA [19] 90.07%, Joint Bayesian 90.90% (Table 3: comparison with state-of-the-art methods following the LFW unrestricted protocol). The results of the other methods are... |

103 | A unified framework for subspace face recognition”,
- Wang, Tang
- 2004
Citation Context ...learning and efficient computation. Because of the simplicity and competitive performance [2] of Bayesian face, further progress has been made along this research line. For example, Wang and Tang [3] propose a unified framework for subspace face recognition which decomposes the face difference into three subspaces: intrinsic difference, transformation difference, and noise. By excluding the transf... |

70 | Cosine similarity metric learning for face verification.
- Nguyen, Bai
- 2010
Citation Context ...ion boundary. Many works try to develop different learning methods for the Mahalanobis distance; however, relatively few investigate other forms of metrics like ours. A recent work [22] explores a metric based on cosine similarity learned discriminatively. Their promising results may inspire us to learn the log-likelihood-ratio metric in a discriminative way in future work.... |

56 | An associate-Predict model for face recognition.
- Yin, Tang, et al.
- 2011
Citation Context ...em which takes the additional advantages of an accurate 3D normalization and billions of training samples. [Fig. 6 (ROC curves, true positive rate vs. false positive rate): Attribute and Simile classifiers [11], Associate-Predict [13], face.com r2011b [16], Joint Bayesian (combined).] The ROC curve of the Joint Bayesian method compared with the state-of-the-art methods which also rely ... |

54 | Multiple one-shots for utilizing class label information.
- Taigman, Wolf, et al.
- 2009
Citation Context ...nrestricted protocol, using only LFW for training. We combine the scores of 4 descriptors (SIFT, LBP, TPLBP and FPLBP) with a linear SVM classifier. The same feature combination can also be found in [12]. As shown in Table 3, our joint Bayesian method achieves the highest accuracy (Table 3: comparison with state-of-the-art methods following the LFW unrestricted protocol: LDML-MkNN [9] 87.5%, Multishot [12] 89.50%, PLDA [19] 90.07%, Joint Bayesian 90.90%). ... |

52 | Probabilistic models for inference about identity.
- Prince, Li, et al.
- 2011
Citation Context ...rformance only with a few top eigenvectors, and its performance decreases if more projections are added, even though they still carry useful information for discrimination. Probabilistic LDA (PLDA) [18,19] uses factor analysis to decompose the face into three factors, x = Bα + Wβ + ξ, i.e. identity Bα, intra-personal variation Wβ, and noise ξ. The latent variables α, β and ξ are assumed to be Gaussian-dis... |

46 | Distance Metric Learning with Eigenvalue Optimization.
- Ying, Li
- 2012
Citation Context ...acy. In our method, using the joint formulation, the metric in Eqn. (4) is free from the above disadvantage. To make the connection clearer, we reformulate Eqn. (4) as (x1 − x2)^T A (x1 − x2) + 2 x1^T (A − G) x2 (10). Comparing Eqn. (9) and Eqn. (10), we see that the joint formulation provides additional freedom for the discriminant surface. The new metric can be viewed as a more general distance which better p... |
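The reformulated metric in Eqn. (10) can be illustrated with a small numpy sketch. The matrices A and G here are toy symmetric stand-ins (in the paper they are derived from the learned covariances); setting G = A shows how the cross term vanishes and a plain Mahalanobis distance is recovered, which is the "additional freedom" the excerpt refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Toy symmetric matrices standing in for the A and G of the metric.
MA = rng.standard_normal((d, d)); A = (MA + MA.T) / 2
MG = rng.standard_normal((d, d)); G = (MG + MG.T) / 2

def metric(x1, x2, A, G):
    """Eqn. (10): Mahalanobis-like term plus cross term 2 x1'(A-G)x2.
    Algebraically equal to x1'Ax1 + x2'Ax2 - 2 x1'Gx2."""
    diff = x1 - x2
    return diff @ A @ diff + 2 * x1 @ (A - G) @ x2

x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
r = metric(x1, x2, A, G)

# With G == A the cross term is zero: plain Mahalanobis-style distance.
r_mahal = metric(x1, x2, A, A)
```

Expanding the quadratic terms shows both forms agree: (x1 − x2)ᵀA(x1 − x2) contributes −2x1ᵀAx2, and the cross term adds 2x1ᵀAx2 − 2x1ᵀGx2 back.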

37 | The FERET evaluation methodology for face-recognition algorithms.
- Phillips, Moon, et al.
- 2000
Citation Context ...[1], the two conditional probabilities in Eqn. (1) are modeled as Gaussians, and eigen analysis is used for model learning and efficient computation. Because of the simplicity and competitive performance [2] of Bayesian face, further progress has been made along this research line. For example, Wang and Tang [3] propose a unified framework for subspace face recognition which decomposes the face diffe... |

36 | Eigenfaces vs. fisherfaces: Recognition using class specific linear projection.
- Belhumeur, Hespanha, et al.
- 1997
Citation Context ...promising results may inspire us to learn the log-likelihood-ratio metric in a discriminative way in future work. 3.2 Connection with LDA and Probabilistic LDA: Linear Discriminant Analysis (LDA) [17] learns discriminative projection directions by maximizing the between-class variation while minimizing the within-class variation. The projections are the eigenvectors of an eigen problem.... |
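The LDA eigenproblem mentioned in this excerpt — projections as eigenvectors of the generalized problem S_b w = λ S_w w — can be sketched as follows. The scatter matrices here are random positive-definite stand-ins rather than data-derived ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Toy symmetric positive-definite stand-ins for the between-class (S_b)
# and within-class (S_w) scatter matrices estimated from labeled data.
Mb = rng.standard_normal((d, d)); S_b = Mb @ Mb.T + np.eye(d)
Mw = rng.standard_normal((d, d)); S_w = Mw @ Mw.T + np.eye(d)

# Generalized eigenproblem S_b w = lambda S_w w, solved as the ordinary
# eigenproblem of inv(S_w) @ S_b (eigenvalues are real and positive
# because both matrices are positive definite).
lam, W = np.linalg.eig(np.linalg.solve(S_w, S_b))
order = np.argsort(-lam.real)
w = W[:, order[0]].real  # direction with the largest eigenvalue

# w maximizes the Rayleigh quotient (w' S_b w) / (w' S_w w) = lambda_max,
# i.e. between-class variation over within-class variation.
ratio = (w @ S_b @ w) / (w @ S_w @ w)
```

The maximized ratio equals the top eigenvalue, matching the excerpt's description of maximizing between-class while minimizing within-class variation.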

27 | Face Alignment via Component-Based Discriminative Search,”
- Liang, Xiao, et al.
- 2008
Citation Context ...s and the names in LFW. Then, the faces are detected by a face detector and rectified by an affine transform estimated from five landmarks, i.e. the eyes, nose and mouth corners. The landmarks are detected by [23]. We extract two kinds of low-level features, LBP [24] and LE [25], from the rectified holistic face. Our final data set contains 99,773 images of 2,995 people, and 2,065 of them have more than 15 images. The l... |

26 | Probabilistic linear discriminant analysis.
- Ioffe
- 2006
Citation Context ...rformance only with a few top eigenvectors, and its performance decreases if more projections are added, even though they still carry useful information for discrimination. Probabilistic LDA (PLDA) [18,19] uses factor analysis to decompose the face into three factors, x = Bα + Wβ + ξ, i.e. identity Bα, intra-personal variation Wβ, and noise ξ. The latent variables α, β and ξ are assumed to be Gaussian-dis... |
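The PLDA generative model quoted here, x = Bα + Wβ + ξ with Gaussian latents, can be simulated to verify its marginal covariance. A toy numpy sketch; the loadings, dimensions, and isotropic noise level σ are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, q1, q2, n = 5, 2, 2, 200_000

# Toy factor loadings; in PLDA, B, W and the noise model are learned.
B = rng.standard_normal((d, q1))  # identity subspace (Bα)
W = rng.standard_normal((d, q2))  # intra-personal subspace (Wβ)
sigma = 0.1                       # isotropic noise std (an assumption here)

# Sample standard-Gaussian latents and noise, then form x = Bα + Wβ + ξ.
alpha = rng.standard_normal((q1, n))
beta = rng.standard_normal((q2, n))
xi = sigma * rng.standard_normal((d, n))
X = B @ alpha + W @ beta + xi

# The marginal covariance of x is B B' + W W' + sigma^2 I; the sample
# covariance should approach it as n grows.
emp = np.cov(X)
theory = B @ B.T + W @ W.T + sigma**2 * np.eye(d)
```

Because α, β, and ξ are independent zero-mean Gaussians, the covariances of the three terms simply add, which is what the closed form checks.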

24 | Local distance functions: A taxonomy, new algorithms, and an evaluation. - Ramanan, Baker - 2011 |

18 | Modeling the joint density of two images under a variety of transformations. - Susskind, Memisevic, et al. - 2011 |

16 | Bayesian face recognition using support vector machine and face clustering
- Li, Tang
Citation Context ...lti-modal and high-dimension problem. The appearance difference can also be computed in any feature space, such as Gabor features [5]. Instead of using a naive Bayesian classifier, an SVM is trained in [6] to classify the difference face, which is projected and whitened in an intra-personal subspace. However, all the above Bayesian face methods are generally based on the difference of a given face pair. A... |

15 | Leveraging billions of faces to overcome performance barriers in unconstrained face recognition. arXiv.org, abs/1108.1122
- Taigman, Wolf
- 2011
Citation Context ...erforms the state-of-the-art supervised methods, through comprehensive comparisons on LFW and WDRef. Our simple system achieves better average accuracy than the current best commercial system (face.com) [16]. – A large dataset (with annotations and extracted low-level features) which is both wide and deep is released. 2 Our Approach: A Joint Formulation. In this section, we first present a naive joint ... |

10 | Bayesian Face Recognition Using Gabor Features.
- Wang, Tang
- 2003
Citation Context ...ce is obtained. In [4], a random subspace is introduced to handle the multi-modal and high-dimension problem. The appearance difference can also be computed in any feature space, such as Gabor features [5]. Instead of using a naive Bayesian classifier, an SVM is trained in [6] to classify the difference face, which is projected and whitened in an intra-personal subspace. However, all the above Bayesian fa... |

7 | Subspace analysis using random mixture models
- Wang, Tang
Citation Context ...three subspaces: intrinsic difference, transformation difference, and noise. By excluding the transformation difference and noise and retaining the intrinsic difference, better performance is obtained. In [4], a random subspace is introduced to handle the multi-modal and high-dimension problem. The appearance difference can also be computed in any feature space, such as Gabor features [5]. Instead of using ... |

6 | A rank-order distance based clustering algorithm for face tagging,” in - Zhu, Wen, et al. - 2011 |

1 | Face recognition with learning-based descriptor
- Cao, Yin, et al.
- 2010
Citation Context ...tector and rectified by an affine transform estimated from five landmarks, i.e. the eyes, nose and mouth corners. The landmarks are detected by [23]. We extract two kinds of low-level features, LBP [24] and LE [25], from the rectified holistic face. Our final data set contains 99,773 images of 2,995 people, and 2,065 of them have more than 15 images. The large width and depth of this dataset will enable researchers to b... |