### Table 14: The ratio of the calibrated and updated means and std for the uniform, normal and inverse log normal marginal distributions for the partial correlation model.

2006

"... In PAGE 29: ...not update the covariance length Lc because Lv ≫ Lc and the influence of the specific value of Lc is very small. In Table 14 we show the ratios for the partial correlation model, for uniform, normal and inverse log normal... ..."

### Table 2: Means and 95% probability intervals of the marginal posterior distributions of the parameters.

"... In PAGE 12: ... An examination of time series plots of the sampled values of the parameters supported the presumption that the Markov chain had stabilized within the initial burn-in of 5000 draws. Table 2 contains the means of the (marginal) posterior distributions of the parameters obtained from the simulation sample. All values are standardized so that 11 = 1.... ..."
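The burn-in-then-summarize procedure described in this snippet can be sketched as follows. The chain array, its dimensions, and the parameter count are hypothetical stand-ins; only the 5000-draw burn-in comes from the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MCMC output: 25,000 draws of 3 parameters.
# Per the snippet, the first 5,000 draws are discarded as burn-in.
chain = rng.normal(size=(25_000, 3))
burn_in = 5_000
samples = chain[burn_in:]

# Posterior means and 95% probability intervals from the retained draws.
post_mean = samples.mean(axis=0)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
```

With draws this cheap to summarize, the table's "means and 95% probability intervals" are just these per-column statistics.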
### Table 1. Summary Statistics for the Marginal Posterior Distributions of the Parameters in the Study of the Additivity between Norfloxacin, Pefloxacin, and Theophylline

2003

"... In PAGE 5: ... The data with 20 isobols for each binary mixture, corresponding to 20 randomly-drawn parameter vectors are shown in Figure 2. Summary statistics of the marginal posterior distribution of each parameter are given in Table 1. Using these distributions, it is possible to assess the additivity or the nonadditivity of the effect of the drugs on convulsions for each mixture.... ..."

### Table 1: Means and variances of the marginal probability distributions of nodes, initially and after evidence.

2003

"... In PAGE 6: ... The first part of the code defines the mean vector and covariance matrix of the Bayesian network. Table 1 shows the initial marginal probabilities of the nodes (no evidence) and the conditional probabilities of the nodes given each of the evidences {A = x1} and {A = x1, C = x3}. An examination of the results in Table 1 shows that the conditional means and variances are rational expressions, that is, ratios of polynomials in the parameters. Note, for example, that for the case of evidence {A = x1, C = x3}, the polynomials are first-degree in p, q, a, b, x1, and x3, that is, in the mean and variance parameters and in the evidence variables, and second-degree in d, f, i.... ..."
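Conditioning a Gaussian Bayesian network on evidence, as this snippet describes, reduces to the standard multivariate-normal conditioning formula. The three-node network, mean vector, and covariance matrix below are hypothetical illustrations, not the cited paper's values:

```python
import numpy as np

def condition_gaussian(mu, sigma, ev_idx, ev_val):
    """Conditional mean/covariance of a Gaussian given evidence on some nodes.
    Standard formulas: mu_1|2 = mu_1 + S12 S22^-1 (x_2 - mu_2)
                       S_1|2  = S11 - S12 S22^-1 S21
    """
    idx = np.arange(len(mu))
    keep = np.setdiff1d(idx, ev_idx)
    s11 = sigma[np.ix_(keep, keep)]
    s12 = sigma[np.ix_(keep, ev_idx)]
    s22 = sigma[np.ix_(ev_idx, ev_idx)]
    gain = s12 @ np.linalg.inv(s22)
    mu_c = mu[keep] + gain @ (np.asarray(ev_val) - mu[ev_idx])
    sig_c = s11 - gain @ sigma[np.ix_(ev_idx, keep)]
    return mu_c, sig_c

# Hypothetical 3-node network (A, B, C); condition on evidence {A = 1.0}.
mu = np.array([0.0, 1.0, 2.0])
sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])
mu_c, sig_c = condition_gaussian(mu, sigma, np.array([0]), [1.0])
```

Because `gain` and `mu_c` are built from products and a matrix inverse, leaving the inputs symbolic yields exactly the ratios of polynomials in the parameters that the excerpt notes.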

Cited by 2

### Table 1: The marginal distributions related to the tree in Figure 1. Legend: Comp. = Theory of Computation; Prob. = Probability Theory; Wri. = Scientific Writing; Mat. = Maturity Test.

"... In PAGE 2: ... All courses in the tree are required for graduating, and the tree shows that students have completed different combinations of them. Table 1 shows the marginal distributions whose entropies are embodied in the tree. We see that the two theoretical courses are correlated quite strongly with each other and less strongly with the other courses.... ..."

### Table 2. Performance (F1 - harmonic mean of precision and recall) on the multimedia data - Average on 5 folds. ModelB is used on the marginal distributions. See text for more information.

"... In PAGE 7: ... As already mentioned, the text and image categorizers themselves are obtained beforehand on some independent data. Table 2 presents the micro-averaged F1 measure (harmonic mean of precision and recall) we obtained on the text and image categories using various methods. Again, the baseline uses the scores of the independent categorizers without taking dependencies into account.... ..."
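The micro-averaged F1 mentioned here pools true positives, false positives, and false negatives across all categories before taking the harmonic mean of precision and recall. A minimal sketch, with hypothetical pooled counts:

```python
def micro_f1(tp, fp, fn):
    """Micro-averaged F1: pool TP/FP/FN over all categories, then take the
    harmonic mean of the pooled precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts pooled over all text and image categories.
score = micro_f1(tp=80, fp=20, fn=40)
```

Micro-averaging weights every decision equally, so frequent categories dominate the score; macro-averaging (averaging per-category F1) would weight categories equally instead.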

### Table 2: Selected quantiles (5%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 95%) of model parameters' marginal posterior distributions

1997

"... In PAGE 11: ... The number of initial "warm-up" runs were chosen to be 500,000, and one set of samples of parameters were then taken in every 100 iterations to avoid serial correlation. The distributions of the six parameters of the piecewise linear model are shown in Figures 3a to 3f, and Table 2 shows selected quantiles of these distributions. The mode and the mean of the marginal posterior distribution of 1 are very close to 0, which further elaborated the mechanism hypothesized in the conceptual model.... ..."
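The warm-up, thinning, and quantile steps in this snippet can be sketched as array slicing over the sampler output. The raw draws below are a hypothetical placeholder; the 500,000-iteration warm-up, every-100th thinning, and the quantile grid come from the excerpt and the table heading:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sampler trace for one parameter: discard 500,000 warm-up
# iterations, then keep every 100th draw to reduce serial correlation.
draws = rng.normal(loc=0.0, scale=1.0, size=700_000)
thinned = draws[500_000::100]

# Selected quantiles of the marginal posterior, as in the table heading.
qs = [0.05, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.95]
quantiles = np.quantile(thinned, qs)
```

Thinning by 100 trades 99% of the retained draws for approximately independent samples, which makes the quantile estimates less noisy per stored draw.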

Cited by 5

### Table 5: Estimated Marketing and Distribution Margins for Cocoa: the Case of Sub-district Ladongi, Southeast Sulawesi in January 1995

"... In PAGE 25: ...clearly receiving a very high proportion of f.o.b. prices. Based on the estimate given in Table 5, the gross marketing margin accruing between the farmgate in Ladongi and Ujung Pandang in that area was 13 percent of the f.... In PAGE 30: ... Out of the total area covered by the program of 205,296 ha for the period 1990/91-1993/94, 62,767 ha was cocoa. While cocoa covered more acreage than any other crops, this amounted to only 5 percent of total smallholder acreage on average during 1990/91-1993/94 (see Table 5). Mainly because of a large expansion in smallholder cocoa, this program was reduced drastically from FY94/95.... In PAGE 31: ...Table 5: P2WK Cocoa Program: FY90/91-FY93/94 (hectares) Province FY90/91 FY91/92 FY92/93 FY93/94 Total D.I.... ..."

### Table 1: Comparison of flat versus hierarchical bootstrap sampling on the KDD Cup data. Columns give measurements of mean-squared error (MSE) and KL divergence (KL Div) between the bootstrapped sample and the original test data for both the flat and hierarchically sampled bootstraps. The rows give the values for the marginal distribution of patients, the marginal distribution of PEs, and the conditional distribution of PEs given patients. Note that while the hierarchical sample has larger MSE/KL than the flat sample for the marginals, it has smaller MSE/KL than flat for the conditional.

"... In PAGE 6: ... To understand the difference between the flat and hierarchical bootstrap samples, we compared the distribution of patients and PEs in the bootstrap to that in the original testing data. Table 1 gives the mean-squared error and KL divergence between the bootstrapped distribution and the original data distribution. The first two rows show the measured values for the marginal distributions of the patients and PEs, respectively, in the flat and hierarchically bootstrapped samples.... ..."
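The two comparison metrics in this snippet, MSE and KL divergence between a bootstrapped distribution and the original one, can be sketched directly. The four-outcome marginals below are hypothetical, not the KDD Cup data:

```python
import numpy as np

def mse_and_kl(p, q, eps=1e-12):
    """MSE and KL(p || q) between two discrete distributions on the same
    support, e.g. the original vs the bootstrapped patient marginal.
    eps guards against log(0) when an outcome has zero mass."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mse = np.mean((p - q) ** 2)
    kl = np.sum(p * np.log((p + eps) / (q + eps)))
    return mse, kl

# Hypothetical marginal over 4 patients: original vs a bootstrap estimate.
original = [0.4, 0.3, 0.2, 0.1]
bootstrap = [0.35, 0.32, 0.22, 0.11]
mse, kl = mse_and_kl(original, bootstrap)
```

Both quantities are zero exactly when the bootstrap reproduces the original distribution, which is why the table reports them side by side for the flat and hierarchical schemes.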