### Table 1. Error ratios obtained with the Bayesian method MPM. (a): Real parameters, (b): Parameters estimated from

"... In PAGE 4: ... In the classical HMC verifying (1), we assume that p(y₁⁽¹⁾) is Gamma and p(y₁⁽²⁾) is Weibull (the parameters are estimated with SEM). According to the results presented in Table 1, we see that when data come from a PMC with Gaussian copula model, the supervised and unsupervised Bayesian MPM methods based on this model can perform much better than the same supervised and unsupervised Bayesian MPM methods based on the classical HMC models. Of course, this is not surprising when the true parameters are used, by Bayesian theory itself; however, the advantage remains when the parameters are estimated, which is interesting for real applications.... ..."

### Table 2: Comparison of observed possibility disjunctions with both possibilistic and probabilistic predictions. *Guarantee that the difference between the model and the data is negligible. **"Undetermined effect": negligibility cannot be claimed.

Conjunction. In the independent condition, the data fit was better with the possibilistic model than with the probabilistic one (see Table 3), because the difference between the data and the latter model was significant. In addition, Bayesian inference allowed us to claim negligibility of the difference between the data and the possibilistic model, but not between the data and the probabilistic model. Surprisingly, the best fit appeared with possibility measures (r(46) = .928; mean difference < 0.4%). This could be due to the fact that in the part of the scale where most conjunction ratings fell (below 50), possibility measures are more relevant than necessity measures.

### Table 3. Mean and standard deviation for surprise and surprise at the outcome in the three versions.

"... In PAGE 6: ... An analysis of variance showed that the presence or absence of anticipations significantly influenced surprise and surprise at the outcome. Table 3 shows the mean and standard deviation for the two responses in the three versions. As we expected, M1 generated significantly more surprise (p < 0.001) and surprise at the outcome (p < 0.005) than the two other versions.... ..."

### Table 5. MDL and Bayesian

2002

"... In PAGE 4: ...(7) from [3] [4]. The experimental results, as shown in Table 5, confirmed that model selection using our Bayesian criterion resulted in better word recognition rates compared with that using the MDL criterion, especially in the case of small amounts of training data. Table 4.... ..."

Cited by 4

### Table 4. Comparison of Bayesian active learning and Bayesian immediate learning on Profile 83.

2003

"... In PAGE 6: ... (e.g., Table 4). This improvement is partly due to the profile (term and term weight) learning algorithm, which also benefits from the additional training data generated by the active learner.... ..."

Cited by 6
