### Table 1. Bootstrap Tests with B Chosen by Pretest

"... In PAGE 8: ... Bmin was normally 99, and the level was always .05. Because it is extremely expensive to evaluate the binomial distribution directly when B is large, we used the normal approximation to the binomial whenever B 10. Table 1 shows results for four different values of the parameter. When the parameter equals 0, so that the null hypothesis is true, B̄, the average number of bootstrap samples, is quite small.... ..."
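The excerpt above replaces exact binomial tail probabilities with a normal approximation when B is large. A minimal sketch of that approximation with a continuity correction (the function name and interface are illustrative, not taken from the cited paper):

```python
import math

def binom_upper_tail_normal(k, n, p):
    """Approximate P(X >= k) for X ~ Binomial(n, p) using the normal
    approximation with a continuity correction; avoids evaluating the
    exact binomial when n (here, the number of bootstraps B) is large."""
    mu = n * p
    sigma = math.sqrt(n * p * (1.0 - p))
    z = (k - 0.5 - mu) / sigma
    # Upper-tail probability of a standard normal via erfc
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

For moderate n and p near .5 the approximation is accurate to a few decimal places, which is ample for a pretest that only needs to decide whether a P value is clearly above or below the level.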

### Table 1. Performance of Bootstrap Tests with B Chosen by Pretest

"... In PAGE 7: ... Since (3) is so simple that the bootstrap distribution is known analytically, we can evaluate the ideal bootstrap P value for each replication. Table 1 shows results for four different values of the parameter. When the parameter equals 0, so that the null hypothesis is true, B̄, the average number of bootstrap samples used by the procedure, is quite small.... In PAGE 9: ... But achieving this goal entails a penalty in terms of our criteria of computing cost, power loss, and the number and magnitude of conflicts between the ideal and feasible bootstrap tests. To demonstrate these features of the A-B procedure, we performed a number of simulation experiments, comparable to those in Table 1. In these experiments, we tried several values of d and of the second parameter, and found that d = .20 with .05 seemed to provide reasonable results when the parameter is 0.... In PAGE 9: ... The results of our simulations are presented in Table 2. Comparing these results with those in Table 1 shows that the A-B procedure performs much less well than the pretesting procedure. Either it achieves similar performance based on far more bootstrap samples (for example, for the parameter value 2, compare A-B with d = .10 and .05 with any of the results in Table 1 except those with .01), or else it achieves much worse performance based on a similar or larger number of bootstrap samples (for example, for the parameter value 1, compare A-B with d = .20 and .10 with the other procedure with Bmax = 12,799 and .0001).
Most of our results actually show the A-B procedure in an unrealistically good light, because the asymptotic P value used to determine B1 is correct.... ..."
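The excerpts above contrast the ideal bootstrap P value with the feasible one computed from B bootstrap samples. A minimal sketch of the feasible version for an upper-tail test (the simulator argument is a stand-in for whatever generates the bootstrap statistics; names are illustrative):

```python
import random

def feasible_bootstrap_p(tau_hat, simulate_statistic, B, seed=0):
    """Feasible bootstrap P value: the fraction of B simulated
    statistics at least as extreme as the observed tau_hat."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(B) if simulate_statistic(rng) >= tau_hat)
    return exceed / B

# Example: statistic distributed N(0, 1) under the null,
# observed value 1.645, so the ideal P value is about .05.
p = feasible_bootstrap_p(1.645, lambda rng: rng.gauss(0.0, 1.0), B=10_000)
```

With B finite the feasible P value is a binomial proportion around the ideal one, which is exactly why the procedures above try to choose B large enough that the two tests rarely conflict.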

### Table 2. Performance of Bootstrap Tests with B Chosen by A-B Procedure

"... In PAGE 9: ...reasonable results when the parameter is 0. This therefore became our baseline case. In all the experiments, we set Bmin = 19 and Bmax = 12,799. The results of our simulations are presented in Table 2. Comparing these results with those in Table 1 shows that the A-B procedure performs much less well than the pretesting procedure.... In PAGE 9: ... These errors cause B1 to be chosen poorly. As can be seen from Table 2, overrejection causes the A-B procedure to use more bootstrap samples than it should, and underrejection causes it to lose power and have more conflicts, while only slightly reducing the average number of bootstrap samples. Note that multiplying the estimated statistic by any positive constant has absolutely no effect on the performance of a bootstrap test with B fixed or on a bootstrap test that uses our procedure to choose B.... ..."
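The last sentence of the excerpt above notes that rescaling the statistic by a positive constant cannot change the outcome of a bootstrap test: the test depends only on the rank of the observed statistic among the bootstrap statistics, and ranks are scale invariant. A small, purely illustrative check:

```python
def bootstrap_p(tau_hat, taus):
    """Bootstrap P value: share of simulated statistics >= tau_hat."""
    return sum(t >= tau_hat for t in taus) / len(taus)

taus = [0.3, 1.1, 2.4, 0.7, 1.9]
c = 10.0  # any positive constant
# Rescaling both the observed and the bootstrap statistics by c
# preserves every comparison, hence the P value is unchanged.
assert bootstrap_p(1.0, taus) == bootstrap_p(c * 1.0, [c * t for t in taus])
```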

### Table 1. One-class D2 test and MoG + bootstrap test using the whole note

2004

Cited by 2

### Table 7: Bootstrap multimodality tests with B = 1000 replications.

"... In PAGE 21: ... fail to reject the null hypothesis of m modes in the density whenever the estimated ASL_m is larger than standard levels of significance. By implementing the above test in each year, we have obtained the results shown in Table 7. In all years, we fail to reject unimodality.... ..."
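The multimodality test in the excerpt above bootstraps an achieved significance level for the null of m modes. At the heart of any such test is a mode count, which can be sketched as counting local maxima of a kernel density estimate (the bandwidth choice and all names here are illustrative, not the cited paper's procedure):

```python
import math

def gaussian_kde(xs, h, grid):
    """Gaussian kernel density estimate of sample xs, bandwidth h,
    evaluated at each point of grid."""
    norm = len(xs) * h * math.sqrt(2.0 * math.pi)
    return [sum(math.exp(-0.5 * ((g - x) / h) ** 2) for x in xs) / norm
            for g in grid]

def count_modes(density):
    """Number of local maxima in a sampled density curve."""
    return sum(1 for i in range(1, len(density) - 1)
               if density[i - 1] < density[i] >= density[i + 1])

grid = [i * 0.1 - 2.0 for i in range(91)]   # covers [-2, 7]
bimodal = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]   # two well-separated clusters
```

A Silverman-style test then finds the smallest bandwidth giving at most m modes and bootstraps from the corresponding smoothed distribution to estimate the ASL.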

### Table 2: List of various amounts of bootstrap data tested.

"... In PAGE 2: ... The test set consisted of 1,192 hand-labeled utterances (37 sessions). Table 2 shows the boot data sizes for the various experiments. Finally, we divided the remaining unlabeled 32,745 DAs (235 sessions) into a 29,471 DA (209 sessions) set for unsupervised training, and a 3,274 DA (26 sessions) set, for cross-validation during training, as described later.... ..."

### TABLE 2. Bootstrap Levels of Tests under Lognormal Sample

2000

Cited by 9

### Table 7: GARCH(1,1) parameter estimates and P-values for the bootstrap tests

"... In PAGE 18: ...differences in closing times, see e.g. Campbell et al. (1997). For several series, the parameter is not significant. Table 7 reports the estimated parameters for the series and the bootstrap P-values for the four tests. The tests were performed by replacing the yt considered in Sections 2 and 3 by the estimated residuals in (5.... In PAGE 21: ... Furthermore, the 95-percent confidence intervals for the wavelet estimate often include the value zero. Thus, as indicated by the analysis in Table 7, the observed long range dependence in asset price volatility... ..."
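For context on the GARCH(1,1) estimates reported in this table: the model's conditional variance follows the recursion h_t = ω + α·ε²_{t-1} + β·h_{t-1}. A minimal sketch of that recursion, initialized at the unconditional variance (one common convention, not necessarily the one used in the cited paper):

```python
def garch11_variances(eps, omega, alpha, beta):
    """Conditional variances h_t = omega + alpha * eps_{t-1}**2
    + beta * h_{t-1} for a GARCH(1,1) process, initialized at the
    unconditional variance omega / (1 - alpha - beta)."""
    h = [omega / (1.0 - alpha - beta)]  # requires alpha + beta < 1
    for e in eps[:-1]:
        h.append(omega + alpha * e * e + beta * h[-1])
    return h
```

Covariance stationarity requires α + β < 1; estimates with α + β close to 1 are what suggest the long-range dependence in volatility discussed in the excerpt.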