### Table 1. Final (Posterior) Probability of the Null Hypothesis after Observing Various Bayes Factors, as a Function of the Prior Probability of the Null Hypothesis

1999

"... In PAGE 2: ... So it is with the Bayes factor: it modifies prior probabilities, and after seeing how much Bayes factors of certain sizes change various prior probabilities, we begin to understand what represents strong evidence and weak evidence. Table1 shows us how far various Bayes factors move prior probabilities on the null hypothesis of 90%, 50%, and 25%. These correspond, respectively, to high initial confidence in the null hypothesis, equivocal confidence, and moderate suspicion that the null hypothesis is not true.... ..."

Cited by 4
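The arithmetic the excerpt describes is the odds form of Bayes' theorem: posterior odds = Bayes factor × prior odds. A minimal sketch, assuming the Bayes factor is expressed as evidence for the null over the alternative (the function name and example values are illustrative, not taken from the cited table):

```python
def posterior_null_prob(prior_null: float, bayes_factor: float) -> float:
    """Posterior P(H0) from a prior P(H0) and a Bayes factor
    (likelihood of the data under H0 relative to H1), via
    posterior odds = bayes_factor * prior odds."""
    prior_odds = prior_null / (1.0 - prior_null)
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# A Bayes factor of 1/10 against the null moves an equivocal 50% prior
# down to about a 9% posterior probability of the null:
print(round(posterior_null_prob(0.50, 0.10), 3))  # 0.091
```

This is how a single table of Bayes factors can be read against several priors at once: the same factor multiplies whatever prior odds the reader brings.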

### Table 4.1. Performance of various hypothesis fusion algorithms on the RM database. MFCC and PLP features were used. Hypo-Comb denotes the hypothesis combination algorithm; Lat-Comb is our lattice combination algorithm.

2005

### Table 4.3. Word Error Rates obtained for various hypothesis combination algorithms using the TID database at two SNRs, 5 dB and 10 dB. Hypotheses were combined from results obtained using standard MFCC and PLP features. Hypo-Comb denotes the hypothesis combination algorithm; Lat-Comb is our lattice combination algorithm.

2005

### Table 4.4. Word Error Rate performance of various hypothesis combination algorithms using the SPINE 1 and SPINE 2 databases. The parallel features used for SPINE 1 were MFCC features with two different DCT implementations. The parallel features used for SPINE 2 were two different LDA features designed to discriminate among different phoneme classes. Hypo-Comb denotes the hypothesis combination algorithm; Lat-Comb is our lattice combination algorithm.

2005

### Table 3: Word error rate on Hub-4E-97-subset for various pruning methods using full (27 Hypothesis) decoding parameters.

1999

"... In PAGE 2: ...5 times that taken by the baseline) without any increase in WER relative to the unpruned lexicon. The results in Table3 show that gains provided by the log-count pruning scheme carry over to the wider beam decoding condition. A lexicon pruned using this second scheme was therefore selected for use in the SPRACH 98 system; we found that the modest improvements from this lexicon were duplicated across test sets (including the full 1997 Hub4 Evaluation) and with different acoustic models.... ..."

Cited by 9

### Table 4. %WER of various bnac NB acoustic models. WB segments were hypothesized using the RT03 WB MPE model.

2005

"... In PAGE 3: ... WB segments were hypothesized using the RT03 WB MPE model. Table4 presents the performance of NB models trained using various methods. The MPE-SPR with ML priors showed slightly poorer performance than the fully MPE-trained NB models, but another iteration of MPE training (with ML prior) gave 0.... ..."

Cited by 3

### Table 5.1. Recognition accuracy of various hypothesis fusion and probability combination schemes for various experiments using the RM, TID and WSJ0 databases. Hyp, Lat, and Prob refer to hypothesis combination, lattice combination, and synchronous probability combination, respectively. Max, Sum, Prod, MaxW, MaxW entropy, SumW, and SumW entropy refer to equal-weighted maximization, equal-weighted summation, multiplication, weighted maximization with loss-based training, weighted maximization with entropy-based weights, weighted summation with loss-based training, and weighted summation with entropy-based weights as the combination functions, respectively. Results were obtained by directly combining tied states of HMMs that share the same state-tying patterns.

2005
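The combination functions the caption enumerates (max, sum, product, and their weighted variants) can be sketched over per-state probabilities from parallel feature streams. This is an illustrative reconstruction, not the thesis's actual implementation; the function name, array shapes, and example scores are assumptions:

```python
import numpy as np

def combine(p_streams, method="sum", weights=None):
    """Synchronously combine per-state probabilities from parallel
    streams (e.g. MFCC and PLP scores for the same frame).
    method: "max"  - equal-weighted maximization
            "sum"  - equal-weighted summation (averaged)
            "prod" - multiplication
            "sumw" - weighted summation with given stream weights
    """
    p = np.stack(p_streams)            # shape: (n_streams, n_states)
    if method == "max":
        return p.max(axis=0)
    if method == "sum":
        return p.mean(axis=0)
    if method == "prod":
        return p.prod(axis=0)
    if method == "sumw":
        w = np.asarray(weights)[:, None]
        return (w * p).sum(axis=0)
    raise ValueError(f"unknown method: {method}")

# Hypothetical scores for three HMM states from two streams:
mfcc = np.array([0.7, 0.2, 0.1])
plp  = np.array([0.5, 0.4, 0.1])
print(combine([mfcc, plp], "prod"))
```

The loss-based and entropy-based weight training mentioned in the caption would supply the `weights` argument; with equal weights, "sumw" reduces to the equal-weighted sum.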

### Table 1: Null Hypotheses (columns: Hypothesis, Restriction)

"... In PAGE 4: ... Contrary to KPSS and LBM we will not only consider the null hypothesis of trend stationarity, but also those of level stationarity and zero mean stationarity. The corresponding parameter restrictions are tabulated in Table1 . Our statistic di ers slightly for the various null hypotheses and so does its asymptotic distribution.... ..."