### Table 2: Gibbs Sampler Results MCMC

1998

"... In PAGE 17: ...Fifth Stage: Hyperpriors As suggested by the exploratory analysis (see Figure 5), we assume that the MRF mean for the spatially varying autoregressive process ~a given by (35) has a large-scale linear spatial trend, decreasing from the northwest to the southeast. Thus, we assume that ~a0 has a simple spatial trend structure: a0(k; l) = a0[1] + a0[2] long(k) + a0[3] lat(l); (44) where a0[1]; a0[2]; a0[3] are independent Gaussian random variables: a0[1] Gau( ~ a0[1]; ~ 2 a0[1]) (45) a0[2] Gau( ~ a0[2]; ~ 2 a0[2]) (46) a0[3] Gau( ~ a0[3]; ~ 2 a0[3]); (47) with xed and speci ed parameters as in Table2 . Furthermore, we let the spatial dependence parameters a; a in (35) be independent Gaussian random variables, but constrain them to ensure positive-de niteness of the marginal covariance matrices: a Gau( ~ a; ~ 2 a) (48) a Gau( ~ a; ~ 2 a); (49) with xed and speci ed parameters as in Table 2.... In PAGE 17: ... Thus, we assume that ~a0 has a simple spatial trend structure: a0(k; l) = a0[1] + a0[2] long(k) + a0[3] lat(l); (44) where a0[1]; a0[2]; a0[3] are independent Gaussian random variables: a0[1] Gau( ~ a0[1]; ~ 2 a0[1]) (45) a0[2] Gau( ~ a0[2]; ~ 2 a0[2]) (46) a0[3] Gau( ~ a0[3]; ~ 2 a0[3]); (47) with xed and speci ed parameters as in Table 2. Furthermore, we let the spatial dependence parameters a; a in (35) be independent Gaussian random variables, but constrain them to ensure positive-de niteness of the marginal covariance matrices: a Gau( ~ a; ~ 2 a) (48) a Gau( ~ a; ~ 2 a); (49) with xed and speci ed parameters as in Table2 . Finally, we assume that the variance parameter 2 a in (35) is independent of a and a and use the inverse Gamma conjugate prior: 2 a IG(~ qa; ~ ra); (50) where the parameters are xed and suitably speci ed as in Table 2.... In PAGE 19: ...esearch (eg., see Gilks et al. 1996). We have taken a rather simple approach in our analysis. 
Initially, we ran three pilot simulations (4000 iterations each) with different starting values (one representative of prior means, and the others widely dispersed within each parameter's prior distribution). In addition to a visual assessment of convergence, we examined the Gelman and Rubin (1992) convergence monitor ($\hat{R}$, which should be close to one for convergence) for the model parameters listed in Table 2. Visual assessment of the three pilot simulations suggested that all parameters had "converged" by 1500 iterations.... In PAGE 19: ... Thus, we discarded the first 1500 iterations and calculated the Gelman and Rubin monitor with the remaining 2500 iterates. As indicated in Table 2, $\sqrt{\hat{R}}$ values for all parameters suggested convergence at 500 iterations (beyond the 1500 burn-in), with monitor values below 1.03 in all cases.... ..."
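The excerpt above monitors the Gelman and Rubin (1992) statistic across three chains started from dispersed points. As a minimal sketch (the function name and chain layout are illustrative, not from the paper), $\hat{R}$ can be computed from post-burn-in draws as:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat from m parallel chains.

    chains: array of shape (m, n) -- m chains, n post-burn-in draws each.
    Values close to 1 suggest the chains have mixed into a common
    distribution; the excerpt monitors sqrt(R-hat), requiring it below 1.03.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return var_hat / W                       # R-hat; monitor its square root
```

Three well-mixed chains drawn from the same distribution give $\hat{R} \approx 1$; chains stuck near different values inflate the between-chain term and push $\hat{R}$ well above 1.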

Cited by 17

### Table 4: Comparison of Gibbs samplers. See Table 3.

"... In PAGE 6: ...able 4: Comparison of Gibbs samplers. See Table 3. per parameter. Using again the tests described in Section 6 we found that WEGibbs does a significantly better job than WGibbs at detecting the implanted motifs (see Table4 and Figure 3). The next logical step is to ask whether a Gibbs analogue of Conspv that would more intimately link the E-values to the optimization procedure can further improve these results.... In PAGE 7: ... As in the original Gibbs sampler, if no improvement to the entropy score is detected in a specified number of iterations (-L) the program starts a new run. Table4 and Figure 3 confirm that by incorporating the E-values into its sampling strategy Gibbspv is better at detecting the implanted motifs in our experiments. In addition, as can be seen from the results in Table 5, Gibbspv can be substantially better than existing algorithms such as MEME [1] and GLAM [3] for finding motifs with unknown width.... ..."

### TABLE I GIBBS SAMPLER FOR SOURCE SEPARATION OF STUDENT t SOURCES.

2005

Cited by 6

### Table 1: Average held-out log probability for the predictive distributions given by variational inference, TDP Gibbs sampling, and the collapsed Gibbs sampler.

"... In PAGE 17: ...amples. This is illustrated in the autocorrelation plots of Figure 5. Comparing the two MCMC algorithms, we found no advantage to the truncated approximation. Table1 illustrates the average log likelihood assigned to the held-out data by the approximate predictive distributions. First, notice that the collapsed DP Gibbs sam- pler assigned the same likelihood as the posterior from the TDP Gibbs sampler|an indication of the quality of a TDP for approximating a DP.... ..."

### Table 4: Gibbs Sampler timings using the 4th order model and varying G (execution time

in Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields

1993

"... In PAGE 14: ... Table 3 presents the timings on the CM-2 for a Gibbs sampler with #0Cxed model order 4, but varies the number of graylevels, G. Table4 gives the corresponding timings on the CM-5. 3.... ..."

### Table 3: Gibbs Sampler timings using the 4th order model and varying G (execution time

in Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields

1993

"... In PAGE 14: ... Table 1 shows the timings of a binary Gibbs sampler for model orders 1, 2, and 4, on the CM-2, and Table 2 shows the corresponding timings for the CM-5. Table3 presents the timings on the CM-2 for a Gibbs sampler with #0Cxed model order 4, but varies the number of graylevels, G.Table 4 gives the corresponding timings on the CM-5.... ..."

### Table 7: Approximate Starting Points and Convergence Criterion of Gibbs Sampler for PVA Data.

1997

Cited by 1

### Table 9: Starting Points and Convergence Criterion of Gibbs Sampler for Orange Tree Data.

1997

Cited by 1