### Table 6. Test data likelihoods for the personalized models

"... In PAGE 6: ... Non-PS maxent outperforms the Markov model for all prediction list lengths but 4 in the 3-component mixture model, and it performs worse only for N = 3, 4, 5 in the 10-component mixture model. In Table 6, we report the likelihood of the personalized models for the test data. The best likelihood is achieved by the Markov mixture model, followed by the non-PS maxent mixture.... ..."

### Table 2. Resulting Log Likelihoods on Synthetic Data

2003

"... In PAGE 5: ... Having applied both HCHC and the SEM algorithm to the synthetic datasets, we now document the results. The average log likelihood at convergence is shown in Table 2. It is apparent that the log likelihoods of the DBNs constructed with the SEM method are much higher than those resulting from HCHC.... In PAGE 6: ... SEM appears to have performed less well, with consistently higher SD and smaller percentages of correct links found. This implies that the high scores in Table 2 are due to spurious correlations causing high SD. Upon investigating the segmentation from SEM, it was ... ..."

Cited by 2

### Table 1: Negative log likelihood of test data

"... In PAGE 10: ... same linear transformation. Although most work in this field models log returns (i.e. log z_t − log z_{t−1}), we found that this gave worse results for the ARIMA models (some of which failed to converge), so we modelled raw prices throughout. Table 1 contains the generalisation performance of each model tested. We had little difficulty with training any of the models, with the exception of the Bayesian treatment of input-dependent noise.... ..."
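The log-return transform discussed in this excerpt (r_t = log z_t − log z_{t−1}) can be sketched as follows; the price series is illustrative only, not data from the paper:

```python
import math

def log_returns(prices):
    """Log returns: r_t = log(z_t) - log(z_{t-1}) for consecutive prices."""
    return [math.log(b) - math.log(a) for a, b in zip(prices, prices[1:])]

prices = [100.0, 105.0, 102.9]  # illustrative raw prices
rets = log_returns(prices)      # one fewer element than the input series
```

The paper notes that, contrary to common practice, modelling raw prices rather than these log returns worked better for their ARIMA models.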

### Table 3: Confusion Matrices for the classification of the test data by the Maximum Likelihood Classification.

1998

Cited by 1

### Table 6: Confusion Matrices for the classification of the test data by the Maximum Likelihood Classification.

1998

Cited by 1

### Table 2: Maximum likelihood results for the superalloy data

"... In PAGE 8: ... where L_i(θ) = δ_i {log[φ(z_i)] − log[σ(x_i) y_i]} + (1 − δ_i) log[1 − Φ(z_i)]. The ML estimate θ̂ of θ is the set of parameter values that maximizes L(θ). Table 2 gives the ML estimates of all model parameters resulting from fitting the fatigue-limit model to the data. Figure 1 shows curves of the ML estimates of the 5, 50 and 95 percentiles of fatigue life.... In PAGE 9: ... estimators can also be computed. See pp. 292-297 of [2]. However, as mentioned earlier, likelihood confidence intervals perform better in the sense that coverage probabilities are closer to nominal confidence levels than those of normal approximation intervals. The confidence intervals in Table 2 indicate that the parameters [ ]_1, [ ]_1 and ... are statistically significantly different from zero. These intervals indicate that there is a relationship between mean fatigue life and the stress level.... ..."
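The per-observation likelihood above is the standard form for right-censored lognormal lifetime data: δ_i = 1 contributes the log density, δ_i = 0 contributes the log survival probability. A minimal sketch of one such contribution (function and parameter names are illustrative, not the paper's code; σ is taken as a constant here rather than a function of stress):

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik_i(y, delta, mu, sigma):
    """Contribution of one unit to the censored lognormal log likelihood.

    delta = 1: observed failure at life y (log density term).
    delta = 0: right-censored at y (log survival term).
    """
    z = (math.log(y) - mu) / sigma
    if delta == 1:
        # exact failure: log of the lognormal density phi(z) / (sigma * y)
        return math.log(norm_pdf(z)) - math.log(sigma * y)
    # censored: log of the survival probability 1 - Phi(z)
    return math.log(1.0 - norm_cdf(z))
```

The total log likelihood L(θ) is the sum of these contributions over all units, which an optimizer then maximizes to obtain the ML estimates reported in Table 2.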

### Table IX. Log marginal likelihood difference, real data

2005

Cited by 6

### Table 1: Various training log likelihoods on the data set for the digit 2

"... In PAGE 7: ... the second last 3 in Figure 1 (b) and (c)). Table 1 contains the data set log likelihoods and average data case log likelihoods on the training set for the digit 2. The results are similar on the training set for the digit 3 and are therefore omitted.... ..."

### Table 4: Transformation Results for MACS data: Maximum Log-likelihoods

"... In PAGE 10: ... We wish to compare the log, square root, cube root, fourth root and untransformed CD4, to investigate whether there is any particular transformation scale for analysis that gives a better fit to the data and better predictions for the MACS data. We fit the LRE, IOU, BM and QRE models to each set of transformed data, and the maximized log-likelihood values are given in Table 4. The QRE model was included here for completeness even though we don't consider it sensible because of its poor efficiency properties.... ..."

### Table III. Data Fitting to the Weibull Distribution: Likelihood, a, b

2006

Cited by 5