### Table 3: Composition of cone-shaped directions (lower case denotes Euclidean approximate inference). Directions so defined do not fulfill all the requirements, because they violate the associative property; for example

### Table 6: Monte Carlo mean and standard errors (in parentheses) of the indirect inference and approximate maximum likelihood estimators for various parameter values, N = 1000. The column Time reports the average time to convergence in seconds.

### TABLE 4. Alternative rooting and branching hypotheses with corresponding log-likelihoods, test statistics, and P-values inferred from the approximately unbiased (AU) and parametric bootstrap (PB) tests.

### Table 3: Inclusion inference.

"... In PAGE 4: ... meronym, partonym and membership, and an approximation of inclusion based on overlap computation and context analysis (see Table 3). Contextual inclusion is computed by checking the overlap between the context feature (see Section 3.... ..."

### Table 1. Variational inference, TDP Gibbs, and collapsed Gibbs inference for the robot data of Section 5.2. Each Markov chain is run to convergence and 25 uncorrelated samples are collected.

2004

"... In PAGE 7: ... The covariance matrix is the sample covariance, and the mean of the hyperparameter is the sample mean. Table 1 gives the results of approximate inference under the three algorithms which we are considering. In this case, both the collapsed Gibbs sampler and TDP Gibbs sampler require the same lag (20 iterations) to produce uncorrelated samples.... ..."

Cited by 10


### TABLE I. Strong phases and inferred values of sin 2α [2] from amplitudes in the factorization approximation with N

### Table 1. Variational, truncated DP Gibbs, and collapsed DP Gibbs inference in the robot data of Section 5.2. Each Markov chain is run to convergence and 25 uncorrelated samples are collected.

2004

"... In PAGE 7: ... The covariance matrix is the sample covariance, and the mean of the hyperparameter is the sample mean. Table 1 gives the results of approximate inference under the three algorithms which we are considering. In this case, both the collapsed Gibbs sampler and truncated DP Gibbs sampler require the same lag (20 iterations) to produce uncorrelated samples.... ..."

Cited by 10

### Table 1. Distributions of k0 and k1. Conditional on the number of components (k0 and k1) for each Gibbs iteration, the distinct normal means and all other parameters were obtained. This allows for direct inference on the characteristics of each component of the mixture distribution (West and Cao (1993); Escobar and West (1995)). The approximate predictive noise density appears in Figure 2(a), illustrating the match with the observed noise sample.

"... In PAGE 10: ... All deconvolution analyses were conditional on k0 and k1. For these priors and this data set, the induced prior probabilities are summarized in the columns labeled "Prior" in Table 1, for later comparison with the posteriors.... In PAGE 11: ...532, and so forth. The height of the third line in Figure 2(c) is very close to zero, corresponding to the posterior probabilities for k0 in the second column of Table 1. We note that, though the prior for the noise distribution was heavily in favor of a single normal distribution, the posterior probabilities strongly suggest two components; the map from prior to posterior for k0 dramatically indicates the data support for two components.... In PAGE 11: ... We note that, though the prior for the noise distribution was heavily in favor of a single normal distribution, the posterior probabilities strongly suggest two components; the map from prior to posterior for k0 dramatically indicates the data support for two components. For the signal distribution, a more typical picture emerges in comparison of columns three and four of Table 1. Though the prior for k1 is heavily concentrated at a single signal level, the posterior is dramatically different, supporting at least five components, and most likely 5, 6 or 7.... ..."

### Table 2: The probability error in underestimated approximate reasoning for E in the network of Figure 1.

2003

"... In PAGE 10: ... We tested this idea by running the branch-and-bound algorithm with a fixed number of nodes. Table 2 shows the mean relative error in inferences (each row is the mean of ten random networks). The relative error is computed using the approximate and the exact values for P(E = e0).... ..."

Cited by 6
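The relative-error summary described in the snippet above can be sketched as follows. This is a minimal illustration only: the function names and the sample (approximate, exact) probability pairs are hypothetical, not taken from the cited paper.

```python
# Sketch of the mean relative error reported per row of Table 2:
# each row averages |approx - exact| / exact over a batch of networks.
def relative_error(approx: float, exact: float) -> float:
    """Relative error of an approximate probability against the exact P(E = e0)."""
    return abs(approx - exact) / exact

def mean_relative_error(pairs):
    """Mean relative error over (approximate, exact) pairs, one per network."""
    return sum(relative_error(a, e) for a, e in pairs) / len(pairs)

# Hypothetical estimates vs. exact values for three networks.
pairs = [(0.30, 0.25), (0.10, 0.10), (0.48, 0.50)]
print(round(mean_relative_error(pairs), 4))  # prints 0.08
```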