### Table 4: WER after 100-best Rescoring: the 100-best list, generated by a trigram model with Good-Turing discounting and Katz back-off, has a baseline WER of 38.5%.

2001

"... In PAGE 4: ... The trigram LM parameters are as estimated above, and we use EM3 parameters for the SLMs. As Table 4 demonstrates, Kneser-Ney smoothing methods improve slightly but consistently over... ..."

Cited by 2
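Katz back-off, as used to generate the 100-best list above, falls back to a lower-order distribution when an n-gram is unseen, scaling the leftover discount mass so probabilities still sum to one. A minimal bigram sketch, with a fixed absolute discount standing in for the Good-Turing discounting used in the paper (function name and discount value are illustrative assumptions, not the paper's setup):

```python
def katz_bigram_prob(w_prev, w, unigrams, bigrams, discount=0.5):
    """Katz back-off for bigrams, sketched with a fixed absolute
    discount in place of full Good-Turing discounting."""
    total = sum(unigrams.values())
    count = bigrams.get((w_prev, w), 0)
    if count > 0:
        # Discounted maximum-likelihood estimate for seen bigrams.
        return (count - discount) / unigrams[w_prev]
    # Probability mass freed by discounting all seen bigrams with this history.
    seen_types = sum(1 for (p, _) in bigrams if p == w_prev)
    left_over = discount * seen_types / unigrams[w_prev]
    # Renormalize over the unigram mass of words unseen after w_prev.
    unseen_mass = sum(unigrams[u] / total for u in unigrams
                      if (w_prev, u) not in bigrams)
    alpha = left_over / unseen_mass if unseen_mass > 0 else 0.0
    return alpha * unigrams[w] / total
```

On a toy corpus the back-off weights make the conditional distribution sum to one, which is the property the scheme exists to preserve.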

### Table 3: Experimental results showing CPU times for computing optimal fill amounts and fill generation. 6 Acknowledgments We thank Larry Camilletti and Duane Boning for enlightening discussions. We also gratefully acknowledge a software donation from Artwork Conversions, Inc. References [1] R. Bek, C. C. Lin and J. H. Liu, personal communication, December 1997.

"... In PAGE 5: ... In our experiments we assume that U and Ut are the maximum window and tile density, respectively. In Table 3, the runtimes for preparing the min-variation LP formulation and solving the resulting LP are given separately. The achieved minimum density for fixed r-dissection windows (M) is also reported.... ..."

### Table 4. Overall score for BaySpell using different smoothing methods. The last method, interpolative smoothing, is the one presented here. Training was on 80% of Brown and testing on the other 20%. When using MLE likelihoods, we broke ties by choosing the word with the largest prior (ties arose when all words had probability 0.0). For Katz smoothing, we used absolute discounting (Ney et al., 1994), as Good-Turing discounting resulted in invalid discounts for our task. For Kneser-Ney smoothing, we used absolute discounting and the back-off distribution based on the "marginal constraint". For interpolation with a fixed λ, Katz, and Kneser-Ney, we set the necessary parameters separately for each word Wi using deleted estimation. Smoothing method Reference Overall

1999

"... In PAGE 18: ... However, we investigated this briefly by comparing the performance of BaySpell with interpolative smoothing to its performance with MLE likelihoods (the naive method), as well as a number of alternative smoothing methods. Table 4 gives the overall scores. While the overall score for BaySpell with interpolative smoothing was 93.... ..."

Cited by 57
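Interpolative smoothing of the kind compared above mixes a higher-order MLE with a lower-order distribution so that unseen events never receive probability zero. A minimal bigram sketch with a single fixed λ (the cited work sets the parameters separately per word via deleted estimation; the fixed λ here is a simplifying assumption):

```python
def interpolated_prob(w_prev, w, unigrams, bigrams, lam=0.7):
    """Linear interpolation: lam * bigram MLE + (1 - lam) * unigram,
    so unseen bigrams still inherit mass from the unigram model."""
    total = sum(unigrams.values())
    p_uni = unigrams.get(w, 0) / total
    history = unigrams.get(w_prev, 0)
    p_bi = bigrams.get((w_prev, w), 0) / history if history else 0.0
    return lam * p_bi + (1 - lam) * p_uni
```

Because both component distributions are normalized, the mixture is too, for any λ in [0, 1].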

### Table 1: Perplexities reported by Katz and Nádas on 100-sentence test set for three different smoothing algorithms where V is the vocabulary, the set of all words being considered. Let us reconsider the previous example using this new distribution, and let us take our vocabulary V to be the set of all words occurring in the training data S, so that we have |V| = 11. For the sentence John read a book, we now have p(John read a book)

1998

Cited by 370
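Perplexity, the quantity compared across smoothing algorithms in the caption above, is two to the power of the cross-entropy of the test data under the model; a minimal sketch computing it from per-word model probabilities (the helper name is illustrative):

```python
import math

def perplexity(word_probs):
    """Perplexity = 2 ** (-(1/N) * sum(log2 p)): the geometric mean of
    the inverse probabilities the model assigns to the test words."""
    n = len(word_probs)
    log_sum = sum(math.log2(p) for p in word_probs)
    return 2 ** (-log_sum / n)
```

A model that assigns every word probability 1/4 has perplexity 4: on average it is as uncertain as a uniform choice among four words.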

### Table 8.1: Time (in sec) needed to compute predictions for all the non-watched movies and all the users. We observe that the slowest methods are the distance-based scoring algorithms (i.e., CT, PCA CT, One-way, and Return) and the cos+ method. The fastest scoring algorithms (if we do not consider the MaxF algorithm which provides nearly immediate results) are L+, Katz, and RFA.

2005
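The Katz scoring algorithm named above is the graph-based Katz index: a weighted count of all walks between two nodes, with each extra step damped by a factor β (it converges when β is below the reciprocal of the adjacency matrix's spectral radius). A minimal pure-Python sketch using the truncated series Σₖ βᵏAᵏ (function name and parameters are illustrative, not the paper's implementation):

```python
def katz_index(adj, beta=0.1, max_len=50):
    """Katz similarity via the truncated series sum_k beta^k * A^k:
    walks between nodes, damped geometrically by length."""
    n = len(adj)
    score = [[0.0] * n for _ in range(n)]
    walk = [row[:] for row in adj]  # holds A^k, starting at A^1
    damp = beta
    for _ in range(max_len):
        for i in range(n):
            for j in range(n):
                score[i][j] += damp * walk[i][j]
        # walk <- walk @ adj, i.e. advance to A^(k+1)
        walk = [[sum(walk[i][k] * adj[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
        damp *= beta
    return score
```

For a two-node path the series is β + β³ + β⁵ + … = β / (1 − β²), which the truncated sum matches to machine precision.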

### Table 1. Proven lower bounds on security in the random-oracle model relative to roots (for RSA) or factorization (for Rabin/Williams). 1996 Bellare/Rogaway proved tight security for RSA and outlined a proof for unstructured Rabin/Williams, but specifically prohibited principal Rabin/Williams and required large B. 1999 Kurosawa/Ogata claimed tight security for principal B = 0 Rabin/Williams (starred entries in the table), but the Kurosawa/Ogata proof has a fatal flaw and the theorem appears unsalvageable. 2003 Katz/Wang introduced a new proof allowing B as small as 1 for claw-free permutation pairs, but claw-free permutation pairs are not general enough to cover Rabin/Williams. This paper generalizes the Katz/Wang idea to cover Rabin/Williams, and introduces a new security proof covering fixed unstructured B = 0 Rabin/Williams.

2008

Cited by 3

### Table 3 suggests that using a window of a maximum of 6 words is sufficient to cover most cases of complex terminological units. Although this can lead to the exclusion of some terms from the final list of CTs, as we stated before, we prefer to limit the recall of the system in order to increase its precision. The potential structure of CTs that will be retrieved by TermoStat can be described using the following regular expression:

"... In PAGE 6: ...ircuit/NN packs/NNS ./. Previous term extraction studies suggested that the length of candidates retrieved by a system should be limited. Table 3 presents the breakdown of the observations done by Justeson and Katz (1993) and Nkwenti-Azeh (1994). Table 3.... In PAGE 6: ... Table 3 presents the breakdown of the observations done by Justeson and Katz (1993) and Nkwenti-Azeh (1994). Table 3. Previous studies on the length of terms and CTs 1 word 2 words 3 words 4 words 5+ words Justeson and Katz 29.... ..."
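The length limit described above can be illustrated with a toy filter over word/TAG sequences. The pattern below is purely hypothetical: TermoStat's actual regular expression is not reproduced in this excerpt, so this sketch only shows the idea of accepting short adjective/noun sequences and rejecting candidates longer than six words:

```python
import re

# Hypothetical toy pattern, NOT TermoStat's actual regex: up to five
# adjective/noun modifiers followed by a noun head, in word/TAG format.
CANDIDATE = re.compile(r"^(?:\w+/(?:JJ|NN|NNS)\s+){0,5}\w+/(?:NN|NNS)$")

def is_candidate_term(tagged):
    """Accept a tagged sequence as a candidate term only if it matches
    the pattern and spans at most 6 words."""
    return len(tagged.split()) <= 6 and bool(CANDIDATE.match(tagged))
```

A sequence like "circuit/NN packs/NNS" passes, while a seven-word sequence is rejected by the window limit regardless of its tags.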

### Table 4: Perplexity results using different bigram and trigram cutoffs.

1997

"... In PAGE 3: ....6. n-Gram Cutoff Optimization Several different bigram and trigram cutoff combinations were tested for the language model with Katz smoothing. Perplexity results are shown in Table 4. Neither singleton trigrams nor singleton bigrams... ..."

Cited by 2
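A count cutoff of the kind tested above simply drops n-grams whose frequency is at or below a threshold before the model is estimated, leaving their mass to the smoothing back-off; a minimal sketch (helper name is illustrative):

```python
def apply_cutoff(ngram_counts, cutoff):
    """Drop every n-gram observed <= cutoff times; a cutoff of 1
    removes singletons, shrinking the model at some cost in fit."""
    return {gram: c for gram, c in ngram_counts.items() if c > cutoff}
```

With a cutoff of 1, only n-grams seen at least twice survive into the model's explicit parameter table.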