
CiteSeerX

Results 1 - 10 of 67,212

Table 1: Recognition rates achieved by the absolute high-level verifiers.

in High-level Verification of Handwritten Numeral Strings
by L. S. Oliveira, R. Sabourin, F. Bortolozzi (PPGIA, Programa de Pós-Graduação em Informática Aplicada)
"... In PAGE 3: ... The feature set that produced the best results was the same one used by the general-purpose recognizer. Table 1 shows the recognition rates reached by each absolute high-level verifier. Table 1: Recognition rates achieved by the absolute high-level verifiers.... ..."

Table 1: Recognition rates achieved by the absolute high-level verifiers.

in High-level Verification of Handwritten Numeral Strings
by L. S. Oliveira, R. Sabourin, F. Bortolozzi, C. Y. Suen (PUCPR)
"... In PAGE 3: ... The feature set that produced the best results was the same one used by the general-purpose recognizer. Table 1 shows the recognition rates reached by each absolute high-level verifier. Table 1: Recognition rates achieved by the absolute high-level verifiers.... ..."

Table 3. Results of the technique extended to 0th- and 1st-order Gaussian derivatives in chrominance channels. High recognition rates are obtained for all objects. Average results are slightly superior to those in section 5.2.

in Object Recognition using Coloured Receptive Fields
by Daniela Hall, Vincent Colin de Verdière, James L. Crowley

Table 2, χ2 results. Figure 2 shows the recognition rates obtained with SG-MRF for different levels of noise, with 50% noisy views in the training set. We see that up to a certain noise level (50), SG-MRF has a very high recognition rate and, at the same time, great stability. As the level of noise increases, SG-MRF starts to lose stability.

in Robust Appearance-based Object Recognition using a Fully Connected Markov Random Field
by unknown authors
"... In PAGE 3: ... Due to space constraints, we present just the most significant results. Table 2 shows the recognition rates obtained for a noise level of 10, three different percentages of noisy views in the training set, and three different percentages of noisy views in the test set, using SG-MRF and χ2. We see that robustness to noise improves dramatically when noisy views are included in the training data.... In PAGE 3: ... This last result, together with the results obtained using χ2, shows in particular that the increased robustness is due to the generalization capabilities of SG-MRF, and not to a similarity effect caused by the presence of similarly degraded views in the training and test sets. If this were the case, we should observe in the last row of Table 2 (SG-MRF results) more or less the same values we observe in the first row, in reversed order. This is instead observed in the Table 2 χ2 results.... In PAGE 3: ... Table 2. Classification results using SG-MRF (top) and χ2 (bottom) for different percentages of noisy views in the training and testing sets; noise level 10.... ..."

(Table 5). With only 100 highly discriminative kernels the recognition rate was still 93%; with 40 kernels it was 90%. When only 10 highly informative kernels were kept in the representation, the recognition rate was still up to 73%, whereas if we chose low-informative kernels the recognition rate dropped to 32%. This shows the usefulness of statistical analysis of kernel activation values for data compression.

in Statistical Analysis of Gabor-filter Representation
by Peter Kalocsai, Hartmut Neven, Johannes Steffens 1998
Cited by 4
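The snippet above describes keeping only the most informative Gabor kernels, ranked by a statistical analysis of their activation values. A minimal sketch of that idea, using a Fisher-style discriminability score as the ranking criterion (the score and all names here are assumptions for illustration, not the paper's exact statistic):

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature (kernel activation) by between-class vs. within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)  # high score = discriminative feature

def keep_top_k(X, y, k):
    """Compress the representation to the k most discriminative features."""
    order = np.argsort(fisher_scores(X, y))[::-1]  # best first
    return order[:k], X[:, order[:k]]
```

Dropping low-scoring kernels this way trades a small loss in recognition rate for a much smaller representation, which is the compression effect the snippet reports.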

Table 2 Discriminative power of different poses: A comparison of identification statistics for recognition using each of the pose-specific cluster distances separately and the proposed method for combining them using an RBF-based neural network. In addition to the expected performance improvement when using all over only some poses, it is interesting to note different contributions of side and frontal pose clusters, the former being more discriminative in the context of the proposed method.

in A Face Recognition System for Access Control Using Video
by Ognjen Arandjelović, Roberto Cipolla
"... In PAGE 30: ... Table 2 shows a summary of the results. High recognition rates were... ..."
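The combination step described in the caption above feeds the pose-specific cluster distances, as a vector, into a small RBF network whose output is the combined match score. A hypothetical sketch of such a combiner (the centres, widths, and weights here are illustrative; in the paper they would be learned from training data):

```python
import numpy as np

def rbf_combine(distances, centres, widths, weights, bias=0.0):
    """Combine pose-specific distances with an RBF network:
    one Gaussian unit per centre, weighted sum as the output score."""
    d = np.asarray(distances, dtype=float)
    # squared Euclidean distance from the input vector to each RBF centre
    sq = np.sum((d - centres) ** 2, axis=1)
    activations = np.exp(-sq / (2 * widths ** 2))
    return float(weights @ activations + bias)
```

Because each unit responds to a region of the joint distance space, the network can weight, say, the side-pose distance more heavily than the frontal one, which matches the table's observation that the side pose is more discriminative under the combined method.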

Table 1. Performance of 19 classifiers trained on 70% and cross-validated on 30% of a large sound database. The mean recognition rate indicates high recognizer performance across all the models.

in General Sound Classification and Similarity in MPEG-7
by MERL (Cambridge)
"... In PAGE 10: ... Each sound in the test set was presented to all 19 models in parallel; the HMM with the maximum likelihood score, computed using a method called Viterbi decoding, was selected as the representative class for the test sound; see Figure 7 (Sound Classification and Similarity System Using Parallel HMMs). The results of classification performance on testing data are shown in Table 1. The results indicate very good recognizer performance across a broad range of sound classes.... ..."
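The classification scheme the snippet describes — score a query against every class model in parallel and pick the label with the maximum likelihood — can be sketched as below. A simple Gaussian log-likelihood stands in for the paper's HMM/Viterbi score; the class names and values are illustrative:

```python
import math

class GaussianModel:
    """Stand-in for a per-class model; an HMM's Viterbi log-likelihood
    would play the same role as log_likelihood() here."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def log_likelihood(self, xs):
        # sum of independent Gaussian log-densities over the observations
        return sum(
            -0.5 * ((x - self.mean) / self.std) ** 2
            - math.log(self.std * math.sqrt(2 * math.pi))
            for x in xs
        )

def classify(models, xs):
    """Maximum-likelihood model selection: evaluate every model on the
    query and return the label of the best-scoring one."""
    return max(models, key=lambda label: models[label].log_likelihood(xs))
```

With one trained model per sound class, `classify` implements the "parallel models, pick the argmax" decision rule from the snippet.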

Table 3 shows the average correct recognition rates of the four classifiers. The LDA classifier achieves the highest correct recognition rate, with an accuracy of 83.6%. The confusion matrix of the average case for the LDA classifier is shown in Table 4. The expressions of Happiness and Surprise are well identified, with accuracies of 95.0% and 90.8%, respectively. Anger, Sadness, Fear and Disgust have comparatively lower recognition rates. The misclassification rate between Anger and Sadness is around 19.6% (= 8.3% + 11.3%), while that between Fear and Happiness is around 16.3%.

in 3D facial expression recognition based on primitive surface feature distribution
by Jun Wang, Lijun Yin, Xiaozhou Wei, Yi Sun
"... In PAGE 6: ... Table 3. Results of person-independent expression classification using the 3D-PSFD method.... In PAGE 6: ... Results of person-independent expression classification using GW and TC. Compared to the performance shown in Table 3, the 3D-PSFD method is superior to the 2D appearance-feature-based methods when classifying the six prototypic facial expressions. 5.... ..."
Cited by 1
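The pairwise misclassification figure quoted above (19.6% = 8.3% + 11.3%) is simply the sum of the two off-diagonal entries for a class pair in a row-normalised confusion matrix. A minimal sketch of reading such rates off a matrix (the labels and counts below are illustrative, not the paper's data):

```python
import numpy as np

def pairwise_confusion(cm, labels, a, b):
    """Confusion rate between classes a and b:
    P(pred=b | true=a) + P(pred=a | true=b)."""
    cm = np.asarray(cm, dtype=float)
    rates = cm / cm.sum(axis=1, keepdims=True)  # row-normalise: rows are true classes
    i, j = labels.index(a), labels.index(b)
    return rates[i, j] + rates[j, i]
```

Summing both directions captures the total mixing between the two expressions, regardless of which one was the true label.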

Table 2. Recognition rate on the various fonts. (Column headers: Fonts, Test data size.)

in Recognition of printed Amharic documents
by Million Meshesha, C. V. Jawahar 2005
"... In PAGE 4: ... We consider an average of more than 7500 samples for the experiment. Results are presented in Table 2. The recognition rate is high for all fonts.... ..."
Cited by 3

Table 3. Recognition rates

in Hierarchical overlapped neural gas network with application to pattern classification
by Ajantha S. Atukorale, P. N. Suganthan 2000
"... In PAGE 10: ... As a result of the above feature extraction method, we were left with a feature vector of 99 elements. The recognition rates obtained using the above parameters are illustrated in Table 3. As can be seen, the HONG architecture improves further on the high classification rate provided by the base NG network.... ..."
Cited by 1

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University