### Table 1: Expected W mass precision from present and future data.

### Table 5. Which algorithm (if any) is significantly better at generalization to future data, given 300 seconds of computation? This was measured by a paired t-test at the 5% level on the results of 20-fold cross-validation.

2003

"... In PAGE 7: ... But does that matter? In some applications it is the ability of the learned model to generalize to likelihood estimation of future data drawn from the same distribution that counts, and do the gains in DagScore translate to gains in performance on such future data? This question is not so much a test of our algorithm, but of whether the structure scoring metric (in these tests, BDEU) is doing its job adequately. Table 5 shows the results of 20-fold cross-validation. On each fold the left-out data is unused until the DAG and the Bayes Net parameters have been constructed from the training set.... In PAGE 7: ... This procedure is applied to both Optimal Reinsertion and Hill climbing, which are each allowed 5 minutes of computation. Table 5 shows that frequently, Optimal Reinsertion of BDEU has a significant generalization advantage (according to a paired t-test) over hill climbing optimization of BDEU. 3.... ..."

Cited by 23
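The evaluation described in this excerpt — comparing two algorithms via a paired t-test at the 5% level on per-fold scores from 20-fold cross-validation — can be sketched as follows. This is a minimal illustration, not the paper's code: the per-fold log-likelihoods are randomly generated placeholders, and the comparison uses the standard two-sided critical value for 19 degrees of freedom rather than a p-value routine.

```python
import math
import random

def paired_t_statistic(a, b):
    """Paired t statistic on the per-fold differences between two score lists."""
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical test-set log-likelihoods for 20 folds (placeholders, not paper data):
# one list per algorithm, paired fold-by-fold.
random.seed(0)
opt_reinsertion = [-1000 + random.gauss(5, 2) for _ in range(20)]
hill_climbing = [-1000 + random.gauss(0, 2) for _ in range(20)]

t = paired_t_statistic(opt_reinsertion, hill_climbing)
# Two-sided 5%-level critical value of Student's t with df = 19 is about 2.093.
significant = abs(t) > 2.093
```

Pairing by fold (rather than an unpaired test) is what makes the comparison sensitive: both algorithms see the same train/test split on each fold, so per-fold variation cancels in the differences.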

### Table 1: Preliminary Results for the Olden Benchmarks

2002

"... In PAGE 9: ... In the future, we aim to test these techniques on other benchmarks, including the SPEC2000 benchmarks. Table 1 shows the results for six of the Olden programs, and a simple matrix multiply routine operating on three matrices. The table shows the compilation time for each benchmark, including the time for data structure analysis and pool allocation, but these times are nearly negligible because the benchmarks are quite small.... ..."

Cited by 23

### Table 1. Descriptive statistics collected from 100 samples run

"... In PAGE 4: ...eans is [36.7 60.5]. However, one must be aware that the trajectory selected remains suboptimal; a future extension of the algorithm is likely to take this factor into consideration. A detailed descriptive statistics comparison of both the algorithms can be found in Table 1... ..."

### Table 2. Parameters for Current and Future Baselines.

2003

"... In PAGE 7: ... The second is a more aggressive 6-wide machine with a pipeline twice as deep and buffers four times as large as those of the current baseline, which we call the future baseline. Table 2 gives the parameters for both baselines. Note that both baselines include a stream-based hardware prefetcher [14].... ..."

Cited by 84
