### Table 3: Results for the Metropolis-Hastings algorithm, ARMA-Normal ARCH

1999

Cited by 2

### Table 4.4: Metropolis-Hastings Algorithm.

### Table 2: Conditional Posterior Densities and Proposal Densities for i in the Implementation of Metropolis-Hastings Algorithms under MNLOG, MNS and MNEP Models (See Appendix for the Notation)

in Parametric Discrete Choice Models Based on the Scale Mixtures of Multivariate Normal Distributions

"... In PAGE 9: ... Chen and Dey (1998) give a comprehensive treatment. For completeness, we list the target and proposal densities used in the Metropolis-Hastings algorithms in Table 2. In the case of MNEP, to draw from the inverse normal distribution IN((4c₀ ε′ᵢ Σ⁻¹ εᵢ)^(−1/2), 1/2), we adopt the algorithm of Devroye (1986). (insert Table 2 around here) 4. Model Comparison Now we consider the question of comparing competing discrete choice models.... ..."
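The Devroye (1986) reference cited in this excerpt covers, among other methods, the transformation-with-multiple-roots sampler of Michael, Schucany and Haas for the inverse Gaussian (inverse normal) family. A minimal sketch of that general sampler, under the assumption that the paper's IN distribution is parameterised by a mean `mu` and shape `lam` (these names are illustrative, not the paper's notation):

```python
import math
import random


def sample_inverse_gaussian(mu, lam, rng=random):
    """Draw one variate from IG(mu, lam) by the transformation-with-
    multiple-roots method (Michael, Schucany and Haas; see Devroye 1986)."""
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    # Smaller root of the quadratic implied by the chi-square transformation.
    x = (mu
         + mu * mu * y / (2.0 * lam)
         - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y))
    # Choose between the two roots with the correct probability.
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x
```

The method is exact (no rejection loop), which is why it is a common choice when an inverse Gaussian draw sits inside every sweep of an MCMC sampler.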

### Table 1: Comparison between algorithms of runtime and quality of the best graph found, for the 12-node example. MH-d(u) refers to the Metropolis-Hastings algorithm on decomposable (unrestricted) models, while SSS-d(u) refers to the shotgun stochastic search method on decomposable (unrestricted) models.

2005

Cited by 13

### Table 2: Posterior means and standard errors for the parameters of the structural models. M1 is the standard cash-in-advance model and M2 is the portfolio adjustment cost model. The moments are calculated from the output of the Metropolis algorithm. The adjustment cost parameter does not enter model M1. The estimated simulation standard errors (see Appendix) for the posterior moments are less than one percent.

2002

"... In PAGE 23: ... Therefore, a random walk Metropolis algorithm, described in the Appendix, is used to generate parameter draws from the posterior distributions. Posterior means and standard errors are calculated from the output of the Metropolis algorithm and summarized in Table 2. For both models the posterior mean of the capital share parameter is about 0.... ..."
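The random walk Metropolis scheme this excerpt describes (propose the current draw plus Gaussian noise, accept with the usual ratio) can be sketched generically. The function names, the step size, and the standard-normal target in the usage below are hypothetical placeholders, not the paper's structural posterior:

```python
import math
import random


def rw_metropolis(log_post, x0, step, n_draws, rng):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, p(x') / p(x)), computed on the log scale."""
    x, lp = x0, log_post(x0)
    draws = []
    for _ in range(n_draws):
        prop = x + step * rng.gauss(0.0, 1.0)
        lp_prop = log_post(prop)
        # Symmetric proposal, so the Hastings correction cancels.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        draws.append(x)
    return draws
```

Posterior means and standard errors such as those in the table are then simple sample moments of the returned draws, typically after discarding a burn-in segment.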

Cited by 2

### Table 2: Estimated posterior means for the t prior using the importance-weighted Gibbs sampler and independence, random walk and rejection Metropolis chains. For the Metropolis chains, R is the proportion of candidates rejected by the Metropolis algorithm. For the rejection chain, F is the average number of function evaluations per candidate step

1994

Cited by 491

### Table 2: Mean coalescence times in seconds for perfect Metropolis-Hastings

"... In PAGE 25: ... A typical invocation of the program implementing the Metropolis-Hastings algorithm to generate a Poisson point process might be mh-cftp -i "poisson" -n m^2 -m m -z z -p p -s s; which would produce a point pattern over a square region S divided into n = m² square cells of side-length z, using random number seed s, the mean number of points in S being np. Table 2 shows mean times (in seconds, measured on a Sun UltraSparc) taken to attain perfect simulation using various seeds, working with various invocations of mh-cftp using a Strauss process with density proportional to γ^(s_r(x)), with reference to the Poisson process considered above. Here r = 1.5 is the interaction radius, γ = 0.5 is the interaction parameter, and s_r(x) denotes the number of pairs of points in x which are closer to each other than the interaction radius.... In PAGE 25: ...nteraction radius. Note that Eqs. (2.1, 5.8) hold with K = p/z². As can be seen in Table 2, there is weak evidence that efficiency is maximized for p in the region of 0.4. However it should be noted that standard deviations based on 30 replicates were of the order of ±0.3.... ..."
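The Strauss density γ^(s_r(x)) in this timing experiment depends on the point pattern only through the close-pair count s_r(x), so a birth move in a Metropolis-Hastings sampler needs only the local change in that count. A small sketch of both quantities; the function names are illustrative and not taken from mh-cftp:

```python
import math


def close_pairs(points, r):
    """s_r(x): the number of pairs of points at distance less than r."""
    n = len(points)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if math.dist(points[i], points[j]) < r)


def birth_density_ratio(points, gamma, r, new_point):
    """Ratio of unnormalised Strauss densities after/before adding new_point.

    Adding new_point creates t new close pairs, where t is the number of
    existing points within distance r of it, so the density gamma**s_r(x)
    is multiplied by gamma**t.  Only a local count is needed, not a full
    recomputation of s_r(x).
    """
    t = sum(1 for p in points if math.dist(p, new_point) < r)
    return gamma ** t
```

With γ < 1, as in the excerpt's γ = 0.5, each new close pair penalises the configuration, giving the characteristic inhibition of the Strauss process.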

### Table 3.2: Updating Schemes of the MCMC algorithm for a Spatial Logistic Regression Model with Group Cluster Effects (MH = Metropolis-Hastings step, RW = random walk, FC = full conditional)

2004

Cited by 2

### Table 2: Efficiency of M-H and GSF relative to the Kalman filter (Metropolis-Hastings; Gaussian-sum filter)

"... In PAGE 7: ... Since our baseline is the Kalman filter we measure the performance of the estimator by its relative efficiency, which is the MSD of the Kalman filter divided by the MSD of the estimator. The results are summarized in Table 2, which shows the relative efficiency as a function of t for both the GSF and the M-H algorithm. We observe that the results are almost identical.... ..."