
Results 1 - 10 of 30,784

Table 3. Dependence of N-gram conflation performance on the N-gram order. (Columns: N-gram order; Average 11-pt Precision.)

in Evaluation of N-Grams Conflation Approach in Text-Based Information Retrieval
by Serhiy Kosinov 2001
"... In PAGE 6: ...rams, i.e. the value of N, are shown in Table 3. It is important to note that when interpreting the figures given in Table 3, one needs to consider the possible influence of the selected AHC algorithm cutoff value. The diagram in Figure 3 clarifies this issue.... ..."
Cited by 3

Table 1: Size of the stochastic language models for different n-gram order and shrinking factor.

in Statistical Modeling for Unit Selection in Speech Synthesis
by Cyril Allauzen, Mehryar Mohri, Michael Riley
Cited by 1

Table 5.2. Varying MTU n-gram model order.

in Machine Translation
by unknown authors

Table 5. Fingerprinting analysis results for each volume of proceedings (excerpt). n-grams are listed in decreasing frequency order.

in Workshop Summary
by Björn Regnell, Erik Kamsties, Vincenzo Gervasi 2004
"... In PAGE 67: ... Specificity results. Table 5 shows an excerpt of the fingerprint lists for each year, that is, n-grams that appear significantly more frequently than the other n-grams in the contents of the proceedings for that year. As could be expected, many terms that were significant for the entire corpus are also significant for several volumes: for example, requirements is the most frequent term in 9 out of 10 volumes (the exception being 1994, the first REFSQ, for which, however, we have incomplete data).... ..."

Table 3. N-gram Substitution Rules

in TRANSLITERATED ARABIC NAME SEARCH
by unknown authors
"... In PAGE 3: ... Table 3 defines the rules used in this work. Rule Rank is the precedence for applying rules, with the most significant digit delineating groups of rules executed sequentially until no more modifications to the enhanced name occur.... In PAGE 4: ... Rules apply to prefixes, suffixes, or non-position-sensitive n-grams. Table 3 shows the substitutions, which are applied in order to the enhanced names. Each rule rank, 1 through 5, is executed multiple times until no new substitutions occur.... ..."

Table 3: Test set perplexities with cluster-based higher-order n-gram models

in The Use of Clustering Techniques for Language Modeling - Application to Asian Languages
by Jianfeng Gao, Joshua T. Goodman, Jiangbo Miao
"... In PAGE 10: ... The cluster-based higher-order n-gram models were then linearly interpolated with normal word-based trigram models. The perplexity results are shown in Table 3. We can see that although we used training corpora of medium size, improvement still occurred even into very high order n-gram models.... ..."

Table 3: Test set perplexities with cluster-based higher-order n-gram models

in The Use of Clustering Techniques for Asian Language Modeling
by Jianfeng Gao, Joshua T. Goodman, Jiangbo Miao
"... In PAGE 10: ... The cluster-based higher-order n-gram models were then linearly interpolated with normal word-based trigram models. The perplexity results are shown in Table 3. We can see that although we used training corpora of medium size, improvement still occurred even into very high order n-gram models.... ..."

Table 1: Perplexity for Turkish language models. N = n-gram order, Word = word-based models, Hand = manual search, Rand = random search, GA = genetic search.

in Automatic Learning of Language Model Structure
by unknown authors
"... In PAGE 6: ... tors in the Turkish word representation, models were only optimized for conditioning variables and backoff paths, but not for smoothing options. Table 1 compares the best perplexity results for standard word-based models and for FLMs obtained using manual search (Hand), random search (Rand), and GA search (GA). The last column shows the relative change in perplexity for the GA compared to the better of the manual or random search models.... ..."

Table 3: Results for the pruned system for different N-gram order combinations. Significance is evaluated with a McNemar matched-pairs test at 95% confidence. Score-level combination results are for the best combination found of systems (a) through (g), using the combiner in Section 2.8.

by E. Shriberg, L. Ferrer
"... In PAGE 16: ...90% EER. Clearly more work can be done along these lines, but to keep things simple, for all further analyses in this paper we use the best-performing system from Table 3, i.... ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University