Results 1 - 10 of 5,080

Table 2: Definitions of the face metrics used in Table 1. The marker positions are taken from a neutral face during initialisation. The second subscript indicates the component of the marker position used: x (horizontal) or y (vertical).

in Real-Time Facial Animation for Virtual Characters
by Dennis Burford, Edwin Blake
Cited by 1
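
The metrics in the caption above reduce to coordinate arithmetic on tracked markers. A minimal sketch of the idea, using hypothetical marker names and positions rather than the paper's actual metric set:

    # Hypothetical marker set; each marker is an (x, y) position. The x/y
    # subscripts in the caption pick one component of positions like these.
    neutral = {"mouth_left": (-20.0, 0.0), "mouth_right": (20.0, 0.0),
               "brow_left": (-15.0, 40.0)}

    def mouth_width(markers):
        # Horizontal metric: difference of the x components of two markers.
        return markers["mouth_right"][0] - markers["mouth_left"][0]

    def brow_raise(markers, neutral):
        # Vertical metric: y displacement of a marker from its neutral-face
        # position captured at initialisation.
        return markers["brow_left"][1] - neutral["brow_left"][1]

    current = {"mouth_left": (-22.0, 0.5), "mouth_right": (23.0, 0.3),
               "brow_left": (-15.0, 44.0)}
    print(mouth_width(current))          # 45.0
    print(brow_raise(current, neutral))  # 4.0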

Table 2. Detection Accuracy (In Percent) of the Automatic Facial Feature Point Detector

in Detection of Facial Feature Points Using Anthropometric Face Model
by Abu Sayeed Md. Sohail, Prabir Bhattacharya
"... In PAGE 9: ...summarized in Table2 . As the result indicates, our feature detector performs well both for the neutral face images and well as for the face images having various expressions.... ..."

Table 6. Country neutral domain names distribution

in unknown title
by unknown authors
"... In PAGE 6: ... Table6 shows the proportion of three of these domains against the rest in OECD countries. Facing the risk of being judged in the future, and being wrong, we now present some forecasts.... ..."

Table 1 shows the main features of the effect. Measures are in bold face if they are significantly different from the neutral passage. The first two columns show intensity measures for all points outside pauses. These global measures are higher for fear, anger and happiness than for sad and neutral passages. However, intensity marking is not a simple matter of loudness. ASSESS reveals two types of structure in it.

in Automatic Statistical Analysis of the Signal and Prosodic Signs of Emotion in Speech
by Roddy Cowie, Ellen Douglas-Cowie
"... In PAGE 2: ... Table1 : Selected intensity contrasts between groups. First, note that intensity is normalised.... ..."

Table 1. The observed EER values and their 0.025 and 0.975 quantiles for verification performance of 3D face recognition algorithms. N-N represents performance of a system for the neutral probes only, N-E for the expressive probes only and N-All for all probes.

in Three dimensional face recognition based on geodesic and Euclidean distances
by Shalini Gupta, Mia K. Markey, J. K. Aggarwal, Alan C. Bovik
"... In PAGE 6: ... Table1 and Table 2 present the equal error rates and the AUC values for the verification performance of 3D face recognition algorithms that were implemented in this study. The corresponding rank 1 recognition rates are presented in Table 3.... ..."

Table 2: Assessing the attributes of the question asker.

in Using a Human Face in an Interface (Human Factors in Computing Systems, CHI '94, Boston, Massachusetts, USA, April 24-28, 1994)
by Janet H. Walker, Lee Sproull, R. Subramani

Table 3. The observed rank 1 RR values and their 0.025 and 0.975 quantiles for identification performance of 3D face recognition algorithms. N-N represents performance of a system for the neutral probes only, N-E for the expressive probes only and N-All for all probes.

in Three dimensional face recognition based on geodesic and Euclidean distances
by Shalini Gupta, Mia K. Markey, J. K. Aggarwal, Alan C. Bovik
"... In PAGE 7: ...Table3 , Figure 3). This trend was observed for the verification as well as the recognition performance of the algorithms.... In PAGE 7: ....0131]; rank 1 RR=92.31%, CI=[90.34 94.27]). Furthermore, the Z PCA algorithm displayed the poorest per- formance of all algorithms tested in this study. The Z LDA algorithm performed significantly better than the Z PCA algorithm (Table 2, Table3 , Figure 4). 4.... ..."

Table 1. Action-Based Face Model

in Feature-Adaptive Motion Energy Analysis for Facial Expression Recognition
by unknown authors
"... In PAGE 3: ... Experiments and discussions are covered in Section 4, and Section 5 draws the conclusion of this paper. 2 Action-Based Face Model Table1 shows our action-based face model, which is based on FACS [3], and the verbal descriptions of facial expressions from DataFace [4, 20]. Table 1.... In PAGE 4: ...Feature-Adaptive Motion Energy Analysis for Facial Expression Recognition 455 Table1 . (continued) Neutral Expressed 8.... In PAGE 6: ... There are five discriminative features but three feature-adaptive motion orientation evaluation methods as shown in Fig. 3, because facial features within the same facial unit shares same action states (See Table1 ). Motion orientation evaluation optimizes the motion intensities of sub-regions of the detected mouth region and proceeds as follows.... ..."

Table 2: Recognition results against a gallery of 112 neutral frontal views. Differential equations were iterated twice per time unit.

in Recognizing Faces by Dynamic Link Matching
by Laurenz Wiskott, Christoph von der Malsburg (1995)
"... In PAGE 6: ...Table2 . As is already known from previous work, recognition of depth-rotated faces is less reliable than for instance that of faces with altered expression.... ..."
Cited by 3
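
Iterating the differential equations twice per time unit amounts to integrating the network dynamics with a fixed step of 1/2. A minimal Euler-stepping sketch of that discretisation; the decay dynamics below are a placeholder, not Wiskott and von der Malsburg's actual dynamic link matching equations:

    import numpy as np

    def euler_integrate(f, x0, t_units, steps_per_unit=2):
        # Advance dx/dt = f(x) with fixed-step Euler, iterating the
        # differential equation `steps_per_unit` times per time unit.
        x, dt = np.asarray(x0, dtype=float), 1.0 / steps_per_unit
        for _ in range(t_units * steps_per_unit):
            x = x + dt * f(x)
        return x

    # Placeholder dynamics: simple decay toward zero, not the DLM equations.
    decay = lambda x: -0.5 * x
    print(euler_integrate(decay, x0=[1.0, 2.0], t_units=4))  # ~[0.1001, 0.2002]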