### Table 2: Generalization accuracies and bit rates for fully Bayesian method

2003

"... In PAGE 5: ... pairings of the study reported in section 2.1. We compare the generalization accuracies of the fully probabilistic model (full Bayes) with those of a classical approach that does feature extraction separately. Table 2 also shows in brackets the probabilities of the null hypothesis that the results of one method are equal to those of the method in the previous column. We may thus conclude that a fully Bayesian approach significantly outperforms classifications obtained when conditioning on feature estimates.... ..."

### Table 4: Bayes estimates of factor loadings. 1 2 3 4

1997

"... In PAGE 13: ... These are quite simple, being Student t, Student t given F, and Inverted Wishart given F, respectively. For example, note that the credibility intervals for the first, second and fourth factor loadings corresponding to the last row of Table 4 include the origin. A commonly used Bayesian hypothesis testing procedure suggests that we should therefore conclude that we cannot reject the hypothesis that these three factor loadings are zero.... ..."

Cited by 1
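The zero-inclusion test described in the excerpt above can be sketched in a few lines: a factor loading is judged "possibly zero" when its credible interval contains the origin. The percentile-interval construction and the synthetic samples below are illustrative assumptions, not the paper's actual posterior.

```python
# Sketch of the Bayesian test from the excerpt: we cannot reject
# "loading = 0" when the posterior credible interval includes the origin.
# Samples here stand in for posterior draws of one factor loading.

def credible_interval(samples, mass=0.95):
    """Equal-tailed percentile interval from posterior samples."""
    s = sorted(samples)
    lo = s[int((1 - mass) / 2 * len(s))]
    hi = s[int((1 + mass) / 2 * len(s)) - 1]
    return lo, hi

def includes_origin(samples, mass=0.95):
    """True when the credible interval contains zero."""
    lo, hi = credible_interval(samples, mass)
    return lo <= 0.0 <= hi
```

With draws centered near zero the interval straddles the origin and the loading cannot be declared nonzero; with draws well above zero it can.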

### Table 2: Bayesian model averaging, Bayesian model selection, and constraint-based results for an analysis of whether "X causes Z" given data summarized in Table 1.

1997

"... In PAGE 11: ... Table 1: A summary of data used in the example.

| number of cases | x̄ȳz̄ | x̄ȳz | x̄yz̄ | x̄yz | xȳz̄ | xȳz | xyz̄ | xyz |
|---|---|---|---|---|---|---|---|---|
| 150 | 5 | 36 | 38 | 15 | 7 | 16 | 23 | 10 |
| 250 | 10 | 60 | 51 | 27 | 15 | 25 | 41 | 21 |
| 500 | 23 | 121 | 103 | 67 | 19 | 44 | 79 | 44 |
| 1000 | 44 | 242 | 222 | 152 | 51 | 80 | 134 | 75 |
| 2000 | 88 | 476 | 431 | 311 | 105 | 180 | 264 | 145 |

The first two columns in Table 2 show the results of applying Equation 4 under the assumptions stated above for the first N cases in the data set. When N = 0, the data set is empty, in which case the probability of hypothesis h is just the prior probability of "X causes Z": 8/25 = 0.... In PAGE 11: ...32. Table 2 shows that the probability that "X causes Z" increases monotonically as the number of cases in the database increases. Although not shown, the probability increases toward 1 as the number of cases increases beyond 2000. Column 3 in Table 2 shows the results of applying Bayesian model selection. Here, we list the causal relationship(s) between X and Z found in the model or models with the highest posterior probability p(m | D).... In PAGE 11: ... Two of the models have Z as a cause of X, and one has X as a cause of Z. Column 4 in Table 2 shows the results of applying the PC constraint-based causal discovery algorithm (Spirtes et al., 1993), which is part of the Tetrad II system (Scheines et al.... ..."

Cited by 54
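The averaging-versus-selection contrast in the excerpt above can be sketched directly. The three candidate models, their priors, and their marginal likelihoods below are invented for illustration; the excerpt's Equation 4 and its 25-model prior are not reproduced.

```python
# Toy sketch of Bayesian model averaging vs. model selection for the
# hypothesis h = "X causes Z". All numbers here are hypothetical.

# Candidate causal models: does each contain the edge X -> Z?
models = {
    "X->Z":          {"has_edge": True,  "prior": 0.32, "marg_lik": 1e-4},
    "Z->X":          {"has_edge": False, "prior": 0.34, "marg_lik": 4e-5},
    "X, Z independent": {"has_edge": False, "prior": 0.34, "marg_lik": 1e-6},
}

# Posterior p(m | D) is proportional to p(D | m) p(m), normalized over models.
evidence = sum(m["prior"] * m["marg_lik"] for m in models.values())
posterior = {name: m["prior"] * m["marg_lik"] / evidence
             for name, m in models.items()}

# Model averaging: p(h | D) = sum of p(m | D) over models containing X -> Z.
p_h = sum(posterior[name] for name, m in models.items() if m["has_edge"])

# Model selection: report only the single model with highest p(m | D).
best = max(posterior, key=posterior.get)
```

Averaging reports a probability for the causal claim itself, while selection commits to one model, which is the distinction the excerpt's columns 1-3 illustrate.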


### Table 6: Comparison of Bayesian scores for all experiments in the ALARM domain COMP C B CB CM BM CBM

1997

"... In PAGE 18: ... For each experiment in the ALARM domain (Tables 3, 4, and 5) the values presented measure the performance of search relative to the worst performance in that experiment. In Table 6, we compare search performance across all experiments in the ALARM domain. That is, a value of zero in the table corresponds to the experiment and set of operators that led to the learned hypothesis with lowest posterior probability, out of all experiments and operator restrictions we considered in the ALARM domain.... ..."

Cited by 116

### Table 1: Error Rates (percentage) of Open-set Pairwise Stress Classification Using Five Linear Features

1998

"... In PAGE 3: ... 3 for stress classification between neutral and loud for mean pitch information. Table 1 shows detection results for all five feature domains using the Bayesian hypothesis testing approach. From Table 1, we can see that (1) pitch is the best feature for stress classification among the five features, (2) error rates generally decrease as feature vector length increases, (3) there are performance differences between different styles of stress, and (4) mean vowel formant locations are not suitable for reliable stress classification. 4.... ..."

Cited by 2
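The Bayesian hypothesis testing approach named in the excerpt above can be sketched for a single feature: classify an utterance by comparing class posteriors under per-class likelihood models. The Gaussian form, the pitch means and standard deviations, and the equal priors below are all hypothetical stand-ins, not the paper's trained models.

```python
import math

# Minimal sketch of Bayesian hypothesis testing between "neutral" and
# "loud" speech using one mean-pitch feature. Parameters are invented.

def gaussian_pdf(x, mean, std):
    """Density of a univariate Gaussian at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify_pitch(pitch_hz, priors=(0.5, 0.5)):
    """Bayes decision rule: pick the hypothesis with higher posterior."""
    # Hypothetical class-conditional densities for mean pitch (Hz).
    p_neutral = gaussian_pdf(pitch_hz, mean=120.0, std=20.0) * priors[0]
    p_loud = gaussian_pdf(pitch_hz, mean=180.0, std=30.0) * priors[1]
    return "neutral" if p_neutral >= p_loud else "loud"
```

The same rule extends to vectors of pitch, duration, intensity, glottal, and formant features, which is how the excerpt's five feature domains are compared.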

### Table 4. Overall results of Chi-Square Test of Independence testing the null hypothesis that the percentage of clustering results achieving the specified cluster recovery level does not differ across clustering methods.

in COMPARISON OF THREE CLUSTERING METHODS FOR DISSECTING TRAIT HETEROGENEITY IN SIMULATED GENOTYPIC DATA

2005

"... In PAGE 28: ... A chi-square test of independence was performed testing the null hypothesis that the number of clusterings achieving a certain ARIHA cutoff value was independent of the clustering method. The three methods performed significantly differently on each of the new ARIHA cutoff statistics (Table 4). The Bayesian Classification outperformed the other two methods.... In PAGE 32: ...001 for cross class entropy). Although significant due to the large sample sizes, the correlations with ARIHA, as evaluated by the Pearson correlation coefficient, were not particularly strong (Table 4). The strongest correlation was between average log of class strength and ARIHA (r=0.... ..."
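The chi-square test of independence used in the excerpt above can be sketched in pure Python. The 3x2 table of counts (three clustering methods by whether the cutoff was achieved) is made up for illustration; only the test statistic follows the standard procedure.

```python
# Pearson chi-square test of independence on a methods-by-outcome table.
# Counts are hypothetical; the test logic matches the excerpt's procedure.
counts = {
    "Bayesian classification": (90, 10),  # (achieved cutoff, did not)
    "Method B": (50, 50),
    "Method C": (20, 80),
}

rows = list(counts.values())
n = sum(sum(r) for r in rows)
col_totals = [sum(r[j] for r in rows) for j in range(2)]

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / n under independence.
chi2 = sum(
    (obs - sum(r) * col_totals[j] / n) ** 2 / (sum(r) * col_totals[j] / n)
    for r in rows
    for j, obs in enumerate(r)
)
dof = (len(rows) - 1) * (2 - 1)  # (rows - 1) * (cols - 1) = 2
```

With these counts chi2 far exceeds the 5.99 critical value at alpha = 0.05 with 2 degrees of freedom, so the null hypothesis of independence would be rejected, mirroring the excerpt's conclusion.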

### Table 5. MDL and Bayesian

2002

"... In PAGE 4: ...(7) from [3] [4]. The experimental results, as shown in Table 5, confirmed that model selection using our Bayesian criterion resulted in better word recognition rates compared with that using the MDL criterion, especially in the case of small amounts of training data. Table 4.... ..."

Cited by 4
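The MDL side of the comparison in the excerpt above can be sketched as a standard two-part code score; the paper's own Bayesian criterion (its Equation 7) is not reproduced here, and the log-likelihoods and parameter counts below are invented.

```python
import math

# Sketch of MDL-style model selection: penalize fit by model complexity.
# score = -log-likelihood + (k / 2) * log(N); lower is better.

def mdl_score(log_lik, n_params, n_samples):
    """Two-part MDL/BIC-style description length of a model."""
    return -log_lik + 0.5 * n_params * math.log(n_samples)

def select_model(candidates, n_samples):
    """Pick the candidate with minimum MDL score.

    candidates: {name: (log_likelihood, n_params)}
    """
    return min(candidates, key=lambda name: mdl_score(*candidates[name], n_samples))
```

The excerpt's point is that with small training sets such penalized scores can differ in which acoustic model they select, and the Bayesian criterion chose models with better word recognition rates.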


### Table 4. Comparison of Bayesian active learning and Bayesian immediate learning on Profile 83.

2003

"... In PAGE 6: ...g., Table 4). This improvement is partly due to the profile (term and term weight) learning algorithm, which also benefits from the additional training data generated by the active learner.... ..."

Cited by 6