Results 1 - 10 of 449
The science of emotional intelligence, 2005
Cited by 887 (38 self)
This article presents a framework for emotional intelligence, a set of skills hypothesized to contribute to the accurate appraisal and expression of emotion in oneself and in others, the effective regulation of emotion in self and others, and the use of feelings to motivate, plan, and achieve in one's life. We start by reviewing the debate about the adaptive versus maladaptive qualities of emotion. We then explore the literature on intelligence, and especially social intelligence, to examine the place of emotion in traditional intelligence conceptions. A framework for integrating the research on emotion-related skills is then described. Next, we review the components of emotional intelligence. To conclude the review, the role of emotional intelligence in mental health is discussed and avenues for further investigation are suggested. Is "emotional intelligence" a contradiction in terms? One tradition in Western thought has viewed emotions as disorganized interruptions of mental activity, so potentially disruptive that they must be controlled. Writing in the first century B.C., Publilius Syrus stated, "Rule your feelings, lest your feelings rule you" [1].
Imagenet: A large-scale hierarchical image database. In CVPR, 2009
Cited by 840 (28 self)
The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
Attribute and Simile Classifiers for Face Verification. In IEEE International Conference on Computer Vision (ICCV), 2009
Cited by 325 (14 self)
We present two novel methods for face verification. Our first method – “attribute” classifiers – uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method – “simile” classifiers – removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set – termed PubFig – of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.
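The attribute-classifier idea can be illustrated with toy stand-ins: train one binary classifier per describable attribute, stack the signed scores into an attribute vector for each face, and verify a pair by comparing vectors. This is a minimal sketch, not the authors' pipeline; the logistic-regression trainer, the feature vectors, and the distance threshold below are all hypothetical.

```python
import numpy as np

def train_attribute_clf(X, y, lr=0.5, epochs=300):
    """Tiny logistic-regression stand-in for one binary attribute
    classifier (e.g., 'wears glasses'). X: (n, d) features, y: 0/1 labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # logistic-loss gradient signal
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def attribute_vector(x, clfs):
    """Signed classifier scores form the face's attribute description."""
    return np.array([x @ w + b for w, b in clfs])

def same_person(x1, x2, clfs, thresh=1.0):
    """Hypothetical verification rule: small attribute-vector distance."""
    d = np.linalg.norm(attribute_vector(x1, clfs) - attribute_vector(x2, clfs))
    return d < thresh
```

A real system would train many such classifiers on labeled face images and learn the final verification function rather than fixing a threshold.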
Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012
Cited by 180 (9 self)
We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.
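The billion-connection model itself is far beyond a snippet, but its basic building block, an autoencoder trained by gradient descent with a sparsity penalty on the activations, can be sketched at toy scale. The tied weights, L1 activity penalty, sizes, and learning rate below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_loss_and_grads(X, W, b, c, beta=0.01):
    """One forward/backward pass of a tied-weight autoencoder with an
    L1 activity penalty (the 'sparse' part). X: (n, d), W: (d, k)."""
    H = sigmoid(X @ W + b)             # encoder activations, (n, k)
    R = H @ W.T + c                    # linear decoder, tied weights
    err = R - X
    loss = 0.5 * (err ** 2).sum() + beta * H.sum()
    dH = err @ W + beta                # backprop into the activations
    dpre = dH * H * (1.0 - H)          # through the sigmoid
    dW = err.T @ H + X.T @ dpre        # decoder + encoder contributions
    return loss, dW, dpre.sum(0), err.sum(0)

def train(X, k, lr=0.005, steps=300, seed=0):
    """Plain gradient descent; the paper instead uses asynchronous SGD
    across a large cluster."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(X.shape[1], k))
    b, c = np.zeros(k), np.zeros(X.shape[1])
    losses = []
    for _ in range(steps):
        loss, dW, db, dc = autoencoder_loss_and_grads(X, W, b, c)
        W -= lr * dW; b -= lr * db; c -= lr * dc
        losses.append(loss)
    return W, b, c, losses
```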
RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images, 2010
Cited by 161 (6 self)
This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.
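The inner convex step, splitting a matrix into a low-rank part plus a sparse error part by minimizing a nuclear-norm/ℓ1 sum, can be sketched via its two proximal operators. The alternating scheme below solves a simple penalized variant, not the paper's constrained program or its efficient solver; the weight λ and step τ are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft threshold: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_lowrank_split(D, lam=None, tau=1.0, iters=200):
    """Block-coordinate descent on
        tau*||A||_* + tau*lam*||E||_1 + 0.5*||D - A - E||_F^2,
    a penalized relaxation of the 'low-rank plus sparse' decomposition."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))  # common RPCA-style weight
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E, tau)         # exact minimization over A
        E = soft(D - A, tau * lam)  # exact minimization over E
    return A, E
```

RASL additionally optimizes over the image-domain transformations, linearizing them and re-solving a convex program of this kind at each outer iteration.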
Is that you? Metric learning approaches for face identification. In ICCV, 2009
Cited by 159 (8 self)
Face identification is the problem of determining whether two face images depict the same person or not. This is difficult due to variations in scale, pose, lighting, background, expression, hairstyle, and glasses. In this paper we present two methods for learning robust distance measures: (a) a logistic discriminant approach which learns the metric from a set of labelled image pairs (LDML) and (b) a nearest neighbour approach which computes the probability for two images to belong to the same class (MkNN). We evaluate our approaches on the Labeled Faces in the Wild data set, a large and very challenging data set of faces from Yahoo! News. The evaluation protocol for this data set defines a restricted setting, where a fixed set of positive and negative image pairs is given, as well as an unrestricted one, where faces are labelled by their identity. We are the first to present results for the unrestricted setting, and show that our methods benefit from this richer training data, much more so than the current state-of-the-art method. Our results of 79.3% and 87.5% correct for the restricted and unrestricted setting respectively, significantly improve over the current state-of-the-art result of 78.5%. Confidence scores obtained for face identification can be used for many applications, e.g. clustering or recognition from a single training example. We show that our learned metrics also improve performance for these tasks.
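The logistic-discriminant idea (LDML) is easy to sketch: model the probability that a pair shows the same person as a sigmoid of a bias minus a Mahalanobis distance, and fit the metric M and bias b by gradient ascent on the pair log-likelihood. A toy sketch under stated assumptions: raw feature vectors, an illustrative learning rate, and no positive-semidefinite projection on M, a simplification the real method would not make.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ldml_fit(pairs, labels, dim, lr=0.05, epochs=200):
    """p(same | xi, xj) = sigmoid(b - (xi-xj)^T M (xi-xj)).
    Stochastic gradient ascent on the log-likelihood over labeled pairs."""
    M = np.eye(dim)
    b = 1.0
    for _ in range(epochs):
        for (xi, xj), y in zip(pairs, labels):
            d = xi - xj
            p = sigmoid(b - d @ M @ d)
            g = y - p                       # d(log-lik)/d(score)
            b += lr * g
            M -= lr * g * np.outer(d, d)    # score decreases in d^T M d
    return M, b

def ldml_score(M, b, xi, xj):
    d = xi - xj
    return sigmoid(b - d @ M @ d)
```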
Towards a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation, 2010
Cited by 108 (10 self)
Many classic and contemporary face recognition algorithms work well on public data sets, but degrade sharply when they are used in a real recognition system. This is mostly due to the difficulty of simultaneously handling variations in illumination, image misalignment, and occlusion in the test image. We consider a scenario where the training images are well controlled, and test images are only loosely controlled. We propose a conceptually simple face recognition system that achieves a high degree of robustness and stability to illumination variation, image misalignment, and partial occlusion. The system uses tools from sparse representation to align a test face image to a set of frontal training images. The region of attraction of our alignment algorithm is computed empirically for public face datasets such as Multi-PIE. We demonstrate how to capture a set of training images with enough illumination variation that they span test images taken under uncontrolled illumination. In order to evaluate how our algorithms work under practical testing conditions, we have implemented a complete face recognition system, including a projector-based training acquisition system. Our system can efficiently and effectively recognize faces under a variety of realistic conditions, using only frontal images under the proposed illuminations as training.
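The sparse-representation machinery at the core of such a system can be sketched: express a test vector as a sparse combination of training vectors (an ℓ1-regularized least-squares problem) and classify by which class's coefficients reconstruct it best. The ISTA solver and toy data below are illustrative assumptions; the paper's system additionally handles alignment and illumination, which this sketch omits.

```python
import numpy as np

def ista_lasso(A, y, lam=0.05, iters=500):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

def src_classify(train, labels, y, lam=0.05):
    """Sparse-representation classification: keep one class's coefficients
    at a time and pick the class with the smallest reconstruction residual."""
    x = ista_lasso(train, y, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - train[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)
```

Here `train` stacks (normalized) training images as columns; in the full system the test image is first aligned to the training set before this step.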
Face recognition with learning-based Descriptor. In Proc. IEEE CVPR, 2010
Cited by 104 (13 self)
We present a novel approach to address the representation issue and the matching issue in face recognition (verification). Firstly, our approach encodes the micro-structures of the face by a new learning-based encoding method. Unlike many previous manually designed encoding methods (e.g., LBP or SIFT), we use unsupervised learning techniques to learn an encoder from the training examples, which can automatically achieve very good tradeoff between discriminative power and invariance. Then we apply PCA to get a compact face descriptor. We find that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor. The resulting face representation, learning-based (LE) descriptor, is compact, highly discriminative, and easy-to-extract. To handle the large pose variation in real-life scenarios, we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations (e.g., frontal vs. frontal, frontal vs. left) of the matching face pair. Our approach is comparable with the state-of-the-art methods on the Labeled Faces in the Wild (LFW) benchmark (we achieved 84.45% recognition rate), while maintaining excellent compactness, simplicity, and generalization ability across different datasets.
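The descriptor pipeline — learn an encoder without supervision, encode local patterns into a histogram, then compress with PCA and normalize — can be sketched with a k-means codebook standing in for the learned encoder. All sizes and the choice of k-means are hypothetical simplifications of the paper's encoder.

```python
import numpy as np

def kmeans_codebook(samples, k, iters=20, seed=0):
    """Unsupervised stand-in for the learned encoder: cluster local
    sampling vectors into k codes. samples: (n, d)."""
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        d2 = ((samples[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d2.argmin(1)
        for c in range(k):
            pts = samples[assign == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return centers

def encode_histogram(samples, centers):
    """Quantize each local vector to its nearest code; the normalized
    code histogram is the raw descriptor."""
    d2 = ((samples[:, None, :] - centers[None]) ** 2).sum(-1)
    h = np.bincount(d2.argmin(1), minlength=len(centers)).astype(float)
    return h / (h.sum() + 1e-12)

def pca_compress(X, n_components):
    """PCA via SVD, then L2 normalization of each compressed descriptor
    (the simple post-PCA normalization the abstract mentions)."""
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
```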
DeepFace: Closing the gap to human-level performance in face verification. In IEEE CVPR, 2014
Cited by 103 (4 self)
In modern face recognition, the conventional pipeline consists of four stages: detect ⇒ align ⇒ represent ⇒ classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.25% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 25%, closely approaching human-level performance.
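The distinctive architectural choice here, locally connected layers where every output position has its own filter instead of sharing one, can be illustrated in one dimension. A toy numpy sketch; the paper's layers are 2D, multi-channel, and vastly larger.

```python
import numpy as np

def conv1d_valid(x, w, b=0.0):
    """Ordinary 1D convolution (correlation): ONE filter shared everywhere."""
    k = len(w)
    return np.array([x[i:i + k] @ w + b for i in range(len(x) - k + 1)])

def local1d_valid(x, W, b):
    """Locally connected layer: a SEPARATE filter W[i] per output position,
    so no weight sharing. W: (out_len, k), b: (out_len,)."""
    out_len, k = W.shape
    return np.array([x[i:i + k] @ W[i] + b[i] for i in range(out_len)])

# Parameter count for input length n, kernel size k, 'valid' outputs:
#   shared convolution:  k + 1
#   locally connected:   (n - k + 1) * (k + 1)
# which is why such layers dominate the 120M-parameter count.
```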