### Table 1: Probabilistic ranking for the queries

2004

"... In PAGE 6: ...e., car, tank, and rocket) are shown in Table 1, as well as their rankings, computed by the mixture models. The first column in Table 1 indicates the query group and the model it comes from, the second column indicates the circular shift applied (i.... In PAGE 6: ...he query results for three of our models (i.e., car, tank, and rocket) are shown in Table 1, as well as their rankings, computed by the mixture models. The first column in Table 1 indicates the query group and the model it comes from, the second column indicates the circular shift applied (i.e.... In PAGE 6: ... Fig. 3 shows the verification results for the hypotheses listed in Table 1 in the case of the rocket model. We received extremely small MSE errors in all of our experiments using artificial data sets.... In PAGE 6: ... We received extremely small MSE errors in all of our experiments using artificial data sets. Table 1 shows that the hypotheses with the highest probabilities were also the correct hypotheses in all cases except in one case (i.... ..."

Cited by 2

### Table 3. Comparison of performance on the Forest data set between one MLP, a standard mixture, and the hard probabilistic mixture proposed in this paper

"... In PAGE 12: ...1). The results are summarized in Table 3. For MLPs and standard mixtures, the iteration column indicates the number of training epochs, whereas for hard mixtures it is the number of outer loop iterations.... In PAGE 14: ... The first experiment uses the methodology already introduced and used with MLP experts, but with 20 SVM experts. Note that training time is much larger than with MLP experts (Table 3... ..."
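The outer-loop scheme this snippet refers to (train each expert on its currently assigned examples, then reassign every example to the expert that models it best) can be sketched as below. This is a minimal illustration, not the paper's actual setup: the logistic-regression experts, learning rate, and iteration counts are all assumptions standing in for the MLP/SVM experts described above.

```python
import numpy as np

def train_hard_mixture(X, y, n_experts=2, outer_iters=5, lr=0.1,
                       inner_steps=200, seed=0):
    """Hard mixture of logistic-regression experts (sketch).

    Each outer-loop iteration: (1) fit every expert on the examples
    currently assigned to it, (2) reassign each example to the expert
    that gives its true label the highest likelihood. One outer-loop
    iteration corresponds to the 'iteration' count for hard mixtures.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assign = rng.integers(0, n_experts, size=n)   # random initial partition
    W = rng.normal(scale=0.01, size=(n_experts, d))
    b = np.zeros(n_experts)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(outer_iters):
        # Inner loop: gradient steps for each expert on its own subset.
        for k in range(n_experts):
            idx = np.where(assign == k)[0]
            if idx.size == 0:
                continue
            Xi, yi = X[idx], y[idx]
            for _ in range(inner_steps):
                p = sigmoid(Xi @ W[k] + b[k])
                g = p - yi
                W[k] -= lr * (Xi.T @ g) / idx.size
                b[k] -= lr * g.mean()
        # Hard reassignment: each example goes to its best expert.
        P = sigmoid(X @ W.T + b)                       # (n, n_experts)
        lik = np.where(y[:, None] == 1, P, 1.0 - P)    # P(true label | expert)
        assign = lik.argmax(axis=1)
    return W, b, assign
```

The hard assignment is what makes the scheme parallelizable: after each reassignment, every expert can be retrained independently on its own disjoint subset.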

### Table 4: Experimental results on dataset 1 Probabilistic

2005

"... In PAGE 6: ... Modeling persons and locations by separate models brings extra computational costs to our approach, so we also run experiments to compare the performance with a simplified version of Probabilistic Model, in which the whole contents are modeled by one mixture of unigram model. Table 4 illustrates the results of the two approaches. The better performance of the full Probabilistic Model indicates the benefits of modeling named entities by separate models.... ..."

Cited by 6
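The simplified baseline in the snippet above scores a document under a single mixture of unigram (multinomial) components. A minimal numpy sketch of that likelihood is below; the component weights `pi` and word distributions `theta` are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def unigram_mixture_loglik(counts, pi, theta):
    """Log-likelihood of a document under a mixture of unigram models:
    log sum_k pi_k * prod_w theta[k, w] ** counts[w].

    counts : (V,) word counts for the document
    pi     : (K,) mixture weights, summing to 1
    theta  : (K, V) per-component word distributions (all entries > 0)
    """
    # Per-component log-likelihood plus log mixture weight, shape (K,).
    log_comp = counts @ np.log(theta).T + np.log(pi)
    # Log-sum-exp for numerical stability.
    m = log_comp.max()
    return m + np.log(np.exp(log_comp - m).sum())
```

The full model in the snippet differs only in that persons and locations get their own mixtures rather than sharing this single one.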

### Table 1: Speaker identification errors for the Gaussian mixture model (GMM), the probabilistic latent semantic analysis model (PLSA) and the regularized probabilistic latent semantic analysis model (RPLSA). Test Data

2005

"... In PAGE 8: ... Specifically, three pieces of test speech from each speaker that have the lengths of 2, 3 or 5 seconds were used in each experiment. The results are shown in Table 1. Clearly, both PLSA and RPLSA are more effective than the GMM in all cases.... ..."

Cited by 3

### Table 5. Comparison of performance of the hard probabilistic mixture, for several setups, on the Forest data set with 100,000 training examples

"... In PAGE 14: ....3. SVM Experts Similar experiments were performed on the Forest database with the hard probabilistic mixture, but using SVMs plus logistic as probabilistic experts, rather than MLPs. Table 5 shows the results obtained on the 100,000 examples training set, with different numbers of experts and different choices of gaters. The first experiment uses the methodology already introduced and used with MLP experts, but with 20 SVM experts.... In PAGE 15: ...ixture (Table 1, 37 min. in parallel). One explanation is that convergence is much slower, but we do not understand why. One clue is that when replacing the GMMs by a single MLP gater (with the two other experiments in Table 5), much faster convergence is obtained (down to 21 min.... ..."

### Table 1: Speaker identification errors for the Gaussian mixture model (GMM), the probabilistic latent semantic analysis model (PLSA) and the regularized probabilistic latent semantic analysis model (RPLSA). Test Data

2005

"... In PAGE 8: ... To compare the algorithms in a wide range we tried various lengths of test data. The results are shown in Table 1. Clearly, both PLSA and RPLSA are more effective than the GMM in all cases.... ..."

Cited by 3

### Table 4. The effect of dimensionality reduction for GMMs in the hard probabilistic mixture, on 400,000 examples with 40 experts, 50 hidden units for experts and 10 Gaussians for each P(X|E = i)

"... In PAGE 13: ...n any case). This is quick, and surprisingly, sufficient to obtain good results. Finally, the hidden layer outputs of the MLP (for each input vector xt) are given as observations for the GMM. As shown in Table 4, it appears that the dimensionality reduction improves the generalization error, as well as the training error. The dimensionality reduction reduces capacity, but we suspect that the GMMs are so poor in high dimensional... ..."
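The dimensionality-reduction step the snippet describes, feeding each input through the MLP's hidden layer and fitting a GMM on the lower-dimensional activations, might look like the sketch below. The tanh hidden layer, diagonal covariances, and EM iteration count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def hidden_features(X, W, b):
    """Dimensionality reduction: use the MLP hidden layer as features."""
    return np.tanh(X @ W + b)

def fit_diag_gmm(H, n_comp=3, iters=20, seed=0):
    """A few EM steps for a diagonal-covariance GMM on reduced features H."""
    rng = np.random.default_rng(seed)
    n, d = H.shape
    mu = H[rng.choice(n, n_comp, replace=False)]   # init means from data
    var = np.ones((n_comp, d))
    pi = np.full(n_comp, 1.0 / n_comp)
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians (log domain).
        logp = (-0.5 * (((H[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(-1)
                + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted means, variances, and mixing proportions.
        nk = r.sum(axis=0) + 1e-9
        mu = (r.T @ H) / nk[:, None]
        var = (r.T @ (H ** 2)) / nk[:, None] - mu ** 2 + 1e-6
        pi = nk / n
    return pi, mu, var
```

Fitting the GMM on, say, a 50-unit hidden representation instead of the raw input is what reduces capacity, consistent with the snippet's observation that GMMs fare poorly in high dimensions.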

### Table 3 The Mixture MCL algorithm, here using the third variant of dual MCL (see Table 2).

1999

"... In PAGE 28: ...ompare this graph to Figures 10 and 12. These results were obtained through simulation. sample with probability 1 − φ using standard MCL, and with probability φ using a dual. Table 3 states the Mixture-MCL algorithm, using the third variant for calculating importance factors in the dual. As is easily seen, the Mixture-MCL algorithm combines the MCL algorithms in Table 1 with the dual algorithm in Table 2, using the (probabilistic) mixing ratio φ.... ..."

Cited by 384
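The per-particle mixing rule the snippet describes, drawing each sample from the dual proposal with probability φ and from the standard MCL proposal otherwise, reduces to a few lines. In this sketch the two sampler callables are placeholders for the actual motion-model and sensor-model (dual) samplers, which are not shown here.

```python
import numpy as np

def mixture_mcl_sample(n, sample_standard, sample_dual, phi=0.1, seed=0):
    """Mixture-MCL proposal: each of the n particles is drawn from the
    dual proposal with probability phi, and from the standard MCL
    (motion-model) proposal with probability 1 - phi.

    sample_standard, sample_dual : callables taking an rng and
    returning one particle (placeholders for the real proposals).
    """
    rng = np.random.default_rng(seed)
    use_dual = rng.random(n) < phi          # Bernoulli(phi) per particle
    particles = np.array([sample_dual(rng) if d else sample_standard(rng)
                          for d in use_dual])
    return particles, use_dual
```

A small φ (the paper's mixing ratio) keeps most particles on the standard motion-model proposal while the dual fraction guards against localization failures.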

### Table 2. Average (over 100 mixtures) number of wrongly detected atoms for the linear sparse-decomposition method (linear) and three different methods measuring the local variance (AV1–AV3).

"... In PAGE 5: ...4. The results are shown in Table 2. The non-linear mapping by the local variance formulas was able to extract the invariant and, therefore, to classify an atom as being present... ..."