### Table 1: Summary of CA function for each model, together with the parametric inference based on Maximum Likelihood or on the Bayes formula assuming a uniform prior for AB.

### Table 4: Results for paper assignment prediction for AAAI-1998 submissions (Basu et al., 2001). It shows the number of positives in top k, based on prior, expected performance (discrete/interpolated/parametric) with p = 0.100 and p = 0.001, and results as reported in the prior study along with the computed p-values (discrete/interpolated/parametric).

"... In PAGE 11: ..., 2001). Table4 shows the significance of the best and worst results reported, where the worst reported results for both P @10 and P @30 used the homepage as a profile for a reviewer and matched it against the abstract of the paper. For the best results, both again used the homepage as the profile of the reviewer which was then matched against either the keywords and title of the paper (P @10) or only the keywords (P @30).... ..."


### Table 1: Common Parametric Correlation Forms (columns: Name, ρ(d; φ))

1997

"... In PAGE 7: ...ry and normally distributed. Then we can write as in, e.g., Diggle, Liang and Zeger (1994, p 87), y N( 1; ( )) (3) where = [ 2 2 ]0 and ( ) = 2I + 2H( ) with (H( ))ij = (dij; ) a valid parametric correlation function depending upon the distance between sites si and sj. Examples of standard parametric forms of (d; ) are given in Table1 where becomes a scalar capturing the rate of correlation decay. For the scallop data described in the introduction, we take the response Y (si) is log(total catch at si + 1) where the constant one is added to address the observed zero catches.... In PAGE 9: ... Note that rV would not exist if 2+ 2 2 gt; 20, but this would be unlikely in practice. For the asymptotically silled variograms given in Table1 , the relationship between the scalar correlation decay parameter and the ranges rC and rV are presented in Table 2. It is obvious that rV rC with equality if 2 = 0.... In PAGE 14: ... Hence, to complete the Bayesian model, speci cation of prior distributions for and is required. For the parametric models of Table1 , we assume the prior ( ; ) takes the form ( ; ) = 1( ) 2( 2) 3( 2) 4( ): Although the parameters ; 2; 2 and are not truly thought to be independent, the alternative, specifying a joint prior incorporating dependence, is arbitrary and di cult to justify. We prefer to let the data modify our independence assumption through the posterior.... In PAGE 21: ... 6.2 Fitted Semivariogram Models All of the parametric models of Table1 and nonparametric Bessel mixtures with di erent combinations of xed and random parameters were t to the 1993 scallop data. Figure 5 shows the posterior mean of each respective semivariogram while Table 3 provides the model choice criteria for each model along with the independence model ( ( ) = ( 2 + 2)I).... ..."

Cited by 17

### Table 2: For the data in Table 1, posterior summary for the parametric model

"... In PAGE 7: ...ollowing, e.g., Hobert and Casella (1996). For the data in Table 1 we adopt the at prior assumption, presenting posterior summaries for the 1j and 2j in Table2 . Here, j = 1 denotes the event \two or... In PAGE 8: ... They are in accord with those for the pij in Table 3. ( Table2 here) Madruga et al. (1996) consider the inversion problem for a healthy older subject showing Y0 = (316; 801; 1310).... In PAGE 16: ... We compute them using 11 = 8, 12 = 5, 21 = 1 and 22 = 0.7 which are roughly the posterior point estimates from Table2 . Then the two sets of data are generated from Multinomial distributions with these probabilities and sample sizes n(1) i and n(2) i .... ..."

### Table 1: Specified prior moments for the constrained-parameter example

1995

"... In PAGE 14: ... We now discuss how we assigned a prior distribution to these parameters that matched speci ed prior moments while satisfying the constraint. For the purposes of this article, we label the parameters x1; x2; x3; x4, with the constraint P4 j=1 xj = 1: The information from the literature search was summarized as prior means and standard deviations on the logarithms of the parameters, as displayed in Table1 . (Speci cation in terms of the logarithms makes sense for the lognormal distributions of the other parameters in the model.... In PAGE 14: ... In practice, the prior variances are low enough that specifying the mean and coe cient of variation on the untransformed scale of would give virtually identical results.) We rst construct a parametric family for the prior distribution of x, given hyperparameters , and then determine by matching to the eight transformed moments given in Table1 , using the algorithm of Section 4.2.... In PAGE 15: ...is acceptable, since the numbers in Table1 are only approximations based on a literature review. The most familiar model for variables that sum to 1 is the Dirichlet.... In PAGE 15: ... In computing the logarithm of the Dirichlet density and its derivative, we must compute the log-gamma function and its derivative, which are fortunately easy to calculate numerically using standard computer programs. We start the iteration at the point = (48; 20; 7; 25), which roughly ts the means and standard deviations in the rst column of Table1 . We proceed with twenty steps of simulation and Newton-Raphson with N = 2000, followed by one simulation of N = 10000 and three Newton-Raphson steps.... In PAGE 15: ... For a comparison, we ran another simulation, starting at the point = (240; 100; 35; 125). In both simulations, the moments had reached approximate convergence, but not to the desired moments in Table1 . 
For example, the standard deviation of x1 in the best method of moments t is log(1:07), compared to the desired value of log(1:2).... In PAGE 16: ...analytic form of the distribution of x. We start the iteration with the rst four components of (the means of the components of ) set to the values in the rst column of Table1 and the second four components (the standard deviations) set to the values in the second column of Table 1. We then apply the algorithm of Section 4.... In PAGE 16: ...analytic form of the distribution of x. We start the iteration with the rst four components of (the means of the components of ) set to the values in the rst column of Table 1 and the second four components (the standard deviations) set to the values in the second column of Table1 . We then apply the algorithm of Section 4.... ..."
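The simulation side of the moment matching described above, estimating the moments of log x_j under a candidate Dirichlet prior so they can be compared with the specified prior moments, can be sketched as follows. The function names, seed, sample size, and use of normalized Gamma draws are illustrative assumptions, not the authors' algorithm:

```python
import math
import random

def rdirichlet(alpha, rng):
    """One Dirichlet(alpha) draw via normalized Gamma(alpha_j, 1)
    variates; the components are positive and sum to 1."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def simulated_log_moments(alpha, n=2000, seed=1):
    """Monte Carlo estimates of E[log x_j] under Dirichlet(alpha):
    the kind of transformed moment that would be matched against
    the specified prior means of the log parameters."""
    rng = random.Random(seed)
    sums = [0.0] * len(alpha)
    for _ in range(n):
        x = rdirichlet(alpha, rng)
        for j, xj in enumerate(x):
            sums[j] += math.log(xj)
    return [s / n for s in sums]
```

A Newton-Raphson step, as in the excerpt, would then adjust α until these simulated moments agree with the eight target values to within tolerance.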

Cited by 1

### Table 1. Parametric Elements

2004

Cited by 3

### Table 2: Parametrizing Y.

2008

"... In PAGE 15: ... We tried our algorithm on examples which we constructed from the canonical surface (given by the binomial ideal with 20 generators) by a linear transformation of the projective space. The randomly generated matrix of the transformation has integral entries with the given maximal absolute value (the first column in Table2 ). We see that almost the whole time is spent for finding the Lie algebra of the surface.... ..."

### Table 1: Benchmark Parametrization

2005

"... In PAGE 32: ...86 3.18 Table1 0: Adaptive Expectations. Output, Investment and Consumption Statistics.... In PAGE 32: ...2539 0.1519 Table1 1: Adaptive Expectations. Correlation Structure.... In PAGE 32: ...85 3.61 Table1 2: Micro-Macro Expectations. Output, Investment and Consumption Statistics.... In PAGE 33: ...1009 0.3298 Table1 3: Micro-Macro Expectations. Correlation Structure.... In PAGE 33: ...676 0.547 Table1 4: Robustness of Simulation Results to Alternative Filtering Procedures. First Difierencing vs.... ..."