Results 1 - 10 of 30,784
Table 3. Dependence of N-gram conflation performance on the N-gram order. Column headers: N-gram order; 11-pt Precision, Average; 11-pt Precision, Average.
2001
"... In PAGE 6: ...rams, i.e. the value of N, are shown in Table 3. It is important to note, that when interpreting the figures given in Table3 , one needs to consider the possible influence of the selected AHC algorithm cutoff value. The diagram on Figure 3 clarifies this issue.... ..."
Cited by 3
Table 1: Size of the stochastic language models for different n-gram order and shrinking factor.
Cited by 1
Table 5. Fingerprinting analysis results for each volume of proceedings (excerpt). N-grams are listed in decreasing order of frequency.
2004
"... In PAGE 67: ...Specificity results Table5 shows an excerpt of the fingerprint lists for each year, that is, n-grams that appear significantly more frequently than the other n-grams in the contents of the proceedings for that year. As could be expected, many terms that were significant for the entire corpus are also significant for several volumes: for ex- ample, requirements is the most frequent term in 9 out of 10 volumes (the exception being 1994, the first REFSQ, for which, however, we have incomplete data).... ..."
Table 3. N-gram Substitution Rules
"... In PAGE 3: ...Table3 defines the rules used in this work. Rule Rank is the precedence for applying rules with the most significant digit delineating groups of rules executed sequentially until no more modifications to the enhanced name occur.... In PAGE 4: ... Rules apply to prefixes, suffixes, or non-position sensitive n-grams. Table3 shows the substitutions, which are applied in order to the enhanced names. Each rule rank, 1 through 5 is executed multiple times until no new substitutions occur.... ..."
Table 3: Test set perplexities with cluster-based higher-order n-gram models
"... In PAGE 10: ... The cluster-based higher-order n-gram models were then linearly interpolated with normal word-based trigram models. The perplexity results are shown in Table3 . We can see that although we used training corpora of medial size, improvement still occurred even into very high order n-gram models.... ..."
Table 1: Perplexity for Turkish language models. N = n-gram order, Word = word-based models, Hand = manual search, Rand = random search, GA = genetic search.
"... In PAGE 6: ... tors in the Turkish word representation, models were only optimized for conditioning variables and backo paths, but not for smoothing op- tions. Table1 compares the best perplexity re- sults for standard word-based models and for FLMs obtained using manual search (Hand), random search (Rand), and GA search (GA). The last column shows the relative change in perplexity for the GA compared to the better of the manual or random search models.... ..."
Table 3: Results for the pruned system for different N-gram order combinations. Significance is evaluated with a McNemar matched-pairs test at 95% confidence. Score-level combination results are for the best combination found of systems (a) through (g), using the combiner in Section 2.8.
"... In PAGE 16: ...90% EER. Clearly more work can be done along these lines, but to keep things simple, for all further analyses in this paper we use the best-performing system from Table3 , i.... ..."