### Table 2. Values of the Normalized Akaike Information Criterion

"... In PAGE 9: ... 4.2 Estimation results In Table2 , we present the values of the NAIC for the various dynamic speci cations of the homogeneity- and symmetry-constrained versions of the AID-System. Table 2.... ..."

### Table 4. Akaike Information Criterion Values for Markov Switching Models (columns: 2 regimes, 3 regimes, 4 regimes)

"... In PAGE 12: ... 12 The Akaike Information Criterion (AIC) values for 2 to 4 regimes and 1 to 4 lags are shown in Table4 . The results show that the lowest AIC value corresponds to the Markov regime switching model with 3 regimes and 1 lag.... ..."

### Table 2. Clustering results after fitting various numbers of components (k) for each dietary quality group comparison

"... In PAGE 8: ... Hence, LOW is clearly different from HIGH and MED, which just differ in degree. Model-Based Clustering and ANOVA Results A summary of the mixture model fitting results is given in Table2 , where the values for the various model selection criteria (AIC, BIC, and LRT) are presented for each model ranging from one to five components. For all three comparisons, there was a dramatic decrease in both AIC and BIC, along with an increase in the log- likelihood when moving from a mixture model with one component to a model with two components.... ..."

### Table 2. Log likelihood values (logL; +45,000 for complete data and +10,000 for subset) and corresponding information criteria (AIC: Akaike Information Criterion, BIC: Schwarz Bayesian Information Criterion, HQC: Hannan and Quinn Criterion, and CAIC: Consistent AIC; -90,000 and -20,000, respectively) for analyses with different orders of polynomial fit (k) and variance function to model measurement error variances (figures in bold denoting best model identified by each criterion).

"... In PAGE 7: ....2.1. Order of fit Table2 summarises maximum log likelihood values and corresponding information criteria for phenotypic analyses. Likelihoods increased significantly with k (at 5% signicance level), even for k = 10 with 114 pa- rameters, i.... In PAGE 9: ....3.1. Subset of data Genetic analyses considering only a subset of the data were carried out fitting Legendre polynomials to order k = 4;6 and 8 for a RRM ignoring maternal genetic effects (Model G1), and k = 4 for a model including the latter (Model G2). Corresponding values for log L and information criteria are given in Table2 . Again, LRTs favoured the model with the highest number of parameters, while information criteria were minimum for orders of fit of k = 4 or 6.... In PAGE 10: ... In doing so, choices of k were guided by results from analyses carried out so far, and not all models were fitted for both data sets. Values for logL and corresponding information criteria for the different analyses are given in Table2 . As above, fitting maternal genetic effects for WOK did not increase logL significantly (k = 4464 vs.... ..."

### Table II. Log likelihood values (log L; +45 400 for Polled Hereford and +49 100 for Wokalup) and corresponding information criteria (AIC: Akaike Information Criterion, BIC: Bayesian Information Criterion; both −90 800 for Polled Herefords and −98 200 for Wokalups) for analyses with different orders of polynomial fit (k; figures in bold denoting best model identified by each criterion). Column headers: k(a), v(b), p(c)

### Table 3: Stepwise logistic regression results. Model size, actual prediction error Err, estimate of Err from model; 5 settings: null model, h=1 (a few strong effects), h=2 (some moderate effects), h=3 (many weak effects), full model. Methods: true: uses actual Err; oracle: bootstrap samples from true model to estimate optimism; cic: covariance inflation criterion; aic: Akaike's information criterion; cb: conditional bootstrap; cv: tenfold cross-validation. Numbers are averages over 30 simulations. Last 3 rows give Monte Carlo standard errors.

1993

"... In PAGE 17: ... As the yardstick, we used the prediction deviance Err over a large test sample rather than model error, as it makes more sense in this context, The column labelled \true quot; shows the performance of the actual minimum Err model in each case. The results are displayed Table3 and can be summarized as follows: For n = 50, AIC and the conditional bootstrap choose models that are too big, and su er a large increase in prediction error. For n = 150 they perform considerably better.... ..."

Cited by 95

### Table 1 Exact Maximum Likelihood Estimates of Fractional Integration Parameters for

1996

"... In PAGE 6: ...4. Hysteresis Test Results In Table1 we present estimates of the fra ctional integration parameter along with Wald statistics for testing the unit root null hypothesis. We sel ect the order (p,d,q) of the ARFIMA models through use of the Akaike Information Criterion (AIC).... ..."

### Table 4. Diagnostic Information on Hierarchical Linear Models.

"... In PAGE 12: ... In the previous section we established that that the log-scaled model best ts the data (log(E) = 0 + 1log(S) + 2C + R). In a similar manner we examine the in uence of domain on the regression coe cients ( 1:::3) in Table4 . This table contains the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) [21] and the log-likelihood of each model.... In PAGE 12: ... This table contains the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) [21] and the log-likelihood of each model. From Table4 we see that model log(E) = 0j + 1log(S) + 2C + R has the lowest AIC and therefore is the best balance between goodness-of- t and number of parameters. If we compare the likelihoods with the optimal regression model from the previous section, we obtain that the hierarchical linear model is... ..."

### Table 1: In-sample parameter estimates (first column: Model)

"... In PAGE 11: ... Finally, we compute the log likelihood and, the Akaike Information Criterion [AIC] and Schwarz apos;s BIC in order to compare the three models. - insert Table1 about here - The in-sample estimation results are given in Table 1. From its bottom panel we see that the three models pass the residual correlation tests (the 5% critical value is 16.... In PAGE 11: ... Finally, we compute the log likelihood and, the Akaike Information Criterion [AIC] and Schwarz apos;s BIC in order to compare the three models. - insert Table 1 about here - The in-sample estimation results are given in Table1 . From its bottom panel we see that the three models pass the residual correlation tests (the 5% critical value is 16.... In PAGE 11: ... It is well-known that Schwarz apos;s criterion penalizes the inclusion of additional parameters rather severely, such that the improvement in t has to be substantial in order to be justi ed. The top panel of Table1 contains the parameter estimates and associated t-ratios. The parameter is not signi cant in the estimated GARCH model, which may be due to neglected asymmetry.... ..."