### Table 6: Model selection based on AIC and fitting different volatility models to the detrended NASDAQ data

"... In PAGE 25: ... We also calculated the AIC from fitting an RCAR(1) to the data using a Bayesian approach. Table 6 contains the summary results which we used to select among the three proposed models. It is observed that the detrended data yt prefer the RCAR(1) fit to either the GARCH(1,2) fit or the AR(1)-GARCH(1,2) fit based on AIC.... In PAGE 25: ...1.3 Forecasting From Table 6, it is seen that both RCAR(1) and AR(1)-GARCH(1,2) are competitive models for fitting the residual series. Therefore, we considered the forecasting problem based on fitting these two models.... ..."
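The selection rule the snippet describes is the standard one: compute AIC = 2k − 2 log L̂ for each fitted model and keep the smallest. A minimal sketch, with purely illustrative log-likelihoods and parameter counts (not the paper's actual fits):

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*logL, lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods and parameter counts for the three
# candidate models (illustrative numbers only, not from the paper):
candidates = {
    "RCAR(1)":          {"loglik": -1520.3, "k": 3},
    "GARCH(1,2)":       {"loglik": -1522.8, "k": 4},
    "AR(1)-GARCH(1,2)": {"loglik": -1521.1, "k": 5},
}

scores = {name: aic(m["loglik"], m["k"]) for name, m in candidates.items()}
best = min(scores, key=scores.get)  # model with the smallest AIC wins
```

With these made-up numbers the RCAR(1) fit has the lowest AIC, mirroring the snippet's conclusion for the detrended NASDAQ series.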

### Table 1: Comparison of enumeration performances of MDL and AIC based SRP tests on the array covariance matrix and of DDDET. Various subarray sizes (p = 6, 7, 8) and SNRs are considered.

1994

Cited by 2

### Table 9. Results of model choice based on LR and AIC for underlying interaction fc = flogistic(x1 + x2). The number in brackets next to the percentage of identified complex interaction models indicates the percentage of models with completely correct specification (i.e. including the weight combination in the interaction term).

### Table 4: Model based CSDs for selected log-linear models of the ALUS data. N̂ is the estimated total number of patients to the nearest 50. The selected models display either one of the ten best AIC values, one of the ten best CSD distance values, or both. Best AIC: model 1; best CSD distance: model 16. Values of "0" are exact, values of "0.00" are rounded. The covariate based log-CSD estimates are shown at the top of the table.

"... In PAGE 10: ... Here, we chose the distance function d = Σ_{Q∈U} (log C̃_Q − log Ĉ_Q)². A better choice of distance could be motivated by a study of the distributional properties of the C̃_Q. Table 4 displays these distances for selected models. The AIC criterion favours the full independence model while the CSD distance criterion favours the model including all available two-way interactions except G:D.... In PAGE 10: ... Figure 2 shows the greater specificity of CSD distance as compared with AIC with respect to the estimated total population, in the sense that aberrantly large population estimates are more easily identified using CSD distance than AIC. This phenomenon is also apparent in Table 4 (consider models 3, 5, 10 and 13), and should apply to small population values as well; there is of course more leeway in which to err above than below the true population value N. The greater specificity of the CSD distance suggests investigating it as a weight for model averaging as described in Buckland et al.... ..."

### Table 4. Factorial array for model selection with dynamic trajectory learning: value of AIC/BIC based on the number of trajectories (T) and the number of hidden neurons (H)

2002

"... In PAGE 4: ... 4.2 Three Classes: Ribosomal, Transcription and Secretion Gene Functional Classes Table 4 provides computed statistical criteria for model selection: values of AIC/BIC with an average of five independent runs based... ..."

Cited by 1

### Table 7. Results of model choice based on LR and AIC for underlying interaction fc = flogistic(2x1 + 0.5x2). The number in brackets next to the percentage of identified complex interaction models indicates the percentage of models with completely correct specification (i.e. including the weight combination in the interaction term).

### Table 2: The experimentally determined probability (percentage) of selecting an I-th order AR model, based on 10,000 order estimations, given N samples of a 1st-order AR process. The order is estimated using AIC and our method as a function of the probability of selecting a too high order (FAP).
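The experiment this caption describes fits AR models of increasing order and picks the order minimising AIC. A minimal sketch of one such order estimation, assuming Gaussian errors and an ordinary-least-squares fit on lagged values (the paper's own estimator and "our method" are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 1st-order AR process: x_t = 0.6 * x_{t-1} + e_t
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

def ar_aic(x: np.ndarray, p: int) -> float:
    """AIC of an AR(p) fit by OLS, using the concentrated Gaussian likelihood."""
    y = x[p:]
    # Design matrix of the p lagged values for each observation.
    X = np.column_stack([x[p - i:-i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (p + 1) - 2 * loglik  # p coefficients + noise variance

orders = range(1, 6)
best_p = min(orders, key=lambda p: ar_aic(x, p))
```

Repeating this over many simulated series (10,000 in the table) yields the empirical probability of each estimated order; AIC is known to select a too-high order with non-vanishing probability, which is what the table's FAP column quantifies.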

### Table 2: R² Fit Statistics for the Twenty Data Sets

"... In PAGE 21: ... Tables 2 and 3 present the results, where the model that explains the largest percentage of variance, respectively has the lowest AIC, is indicated in bold face type for each data set. [INSERT TABLES 2 AND 3 ABOUT HERE] Model selection based on the highest R² fit statistic and the lowest AIC (in Table 2 and... ..."

### Table 2: Results of model estimation using Multistart optimization (n = 1000 starting points). The columns are the total number of parameters in the three-tuple (A, ·, ·), the number of different fixed points identified during the estimation (ε_I = 10⁻³), the number of times that the algorithm converged to a degenerate point (ε_D = 10⁻²⁰⁰), the average number of iterations required to satisfy the stopping criterion (ε_S = 10⁻⁶), the number of starting points that converged to the best fixed point, and the log-likelihood log L̂ of the best fixed point.

"... In PAGE 15: ... This is due to the fact that any parameter in A (or ·) set to zero initially will remain at zero throughout the re-estimation procedure. Table 2 presents some information on the estimation of the models 1 to 5. One can assume that the more complicated the model, the more difficult the estimation becomes.... In PAGE 20: ... As an aside, we can note that if the more stringent (with respect to additional parameters) BIC criterion were used, the three-state model 3 would have been the favored model. This is due to the fact that in an HMM the number of parameters increases considerably with each additional state (see Table 2). We will take model 4 as our favored model based on the AIC values, despite the fact that the normality test still rejects the null hypothesis for this model.... ..."
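The AIC-versus-BIC disagreement in this snippet comes purely from the penalty terms: AIC charges 2 per parameter, BIC charges log(n) per parameter, so BIC punishes the parameter-heavy four-state HMM more. A minimal sketch with hypothetical log-likelihoods and parameter counts (not the paper's values):

```python
from math import log

def aic(loglik: float, k: int) -> float:
    """AIC = 2k - 2*logL."""
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    """BIC = k*log(n) - 2*logL; the per-parameter penalty log(n)
    exceeds AIC's penalty of 2 as soon as n > e^2 ≈ 7.4."""
    return k * log(n) - 2 * loglik

# Hypothetical values: a 3-state HMM with 20 parameters versus a
# 4-state HMM with 35 parameters, fitted to n = 1000 observations.
n_obs = 1000
aic3, aic4 = aic(-2500.0, 20), aic(-2480.0, 35)
bic3, bic4 = bic(-2500.0, 20, n_obs), bic(-2480.0, 35, n_obs)
```

With these numbers the four-state model wins under AIC while the three-state model wins under BIC, reproducing the kind of reversal the snippet describes.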

### Table 1: Simulated ISAR(1) model: ordinary least squares estimates Constant Exponential Reciprocal

1995

"... In PAGE 14: ... The i's are sampled from a Gamma distribution with parameters 2 and 0.5 (giving the mean ticking frequency of 1) and the i's are sampled from a Normal distribution with mean 0 and variance 1. The ordinary least squares estimates together with the AIC values from three different functions are shown in Table 1. Our approach is successful in two aspects: first, the OLS estimates are all very close to the value we sampled from (0.3): (i) constant function (estimate 0.28), (ii) exponential function (â = 0.31, b̂ = 0.23), (iii) reciprocal function (â = 0.29, b̂ = 0.30); second, we are able to select the correct model based on AIC.... In PAGE 14: ... Our approach is successful in two aspects: first, the OLS estimates are all very close to the value we sampled from (0.3): (i) constant function (estimate 0.28), (ii) exponential function (â = 0.31, b̂ = 0.23), (iii) reciprocal function (â = 0.29, b̂ = 0.30); second, we are able to select the correct model based on AIC. More interesting results can be seen from Table 1. The estimated parameter (0.41) modeled by the constant function with data sampled from the exponential function is very close to 0.42, which is the value in (5) with parameters 2 and 0.5.... ..."

Cited by 1