### Table 1: Expense matrix for the decision model applied to a THC collapse.

"... In PAGE 8: ... Table 1 Fourth, we identify the effect of available information about the THC on the expected costs of each of the two policy choices. A number of conclusions are easily drawn from the estimated costs and losses shown in Table 1. First, for a prior belief, p, that a THC collapse is impossible (p = 0), one would not choose the preservation strategy, because its costs would exceed the zero expected losses.... ..."
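The conclusion in the excerpt follows from a two-action expected-cost comparison under the prior belief p. The sketch below illustrates that logic; the preservation cost and collapse loss figures are hypothetical placeholders, not values from the paper's Table 1.

```python
def expected_cost(p_collapse, preserve_cost, collapse_loss):
    """Expected cost of each policy under prior belief p_collapse.

    Returns (cost_if_preserve, cost_if_do_nothing).
    """
    # Preserving incurs its cost regardless of whether a collapse occurs.
    cost_preserve = preserve_cost
    # Doing nothing incurs the loss only if the collapse actually happens.
    cost_nothing = p_collapse * collapse_loss
    return cost_preserve, cost_nothing

# With a prior of zero (collapse believed impossible), preservation can
# never be the cheaper choice, matching the excerpt's first conclusion.
preserve, nothing = expected_cost(0.0, preserve_cost=10.0, collapse_loss=1000.0)
assert preserve > nothing  # any positive preservation cost exceeds zero loss
```

With a strictly positive prior, the comparison flips once p_collapse × collapse_loss exceeds the preservation cost, which is the trade-off the decision model in the excerpt evaluates.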

### Table I. However, it was decided that, before constructing this relatively large and expensive (both to build and especially to test) structure, a smaller, 1/4-scale model of the proposed structure should be designed, constructed, and tested,

### Table 7.6: Empirically determined page invalidation thresholds for workload applications under faster NOW and NOW+HW architectural models. Results show that as latency falls and updates become less expensive relative to page transfers, invalidation thresholds tend to increase.

### Table 2a Composition of the Expense Ratio using Net Risk-Adjusted Returns (Sample of 1015 funds)

"... In PAGE 17: ... Initially, we examined the composition of the expense ratio for the entire sample of 1015 funds. Table 2a presents the results for the linear model using net risk-adjusted returns as the proxy for quality, while Table 2b uses gross risk-adjusted returns as the quality proxy. As expected, fees are negatively related to the logarithm of size rather than to size.... In PAGE 18: ... As predicted, the results in Tables 3a and 3b show large differences between positive and negative alphas. A comparison of the standard error of the linear alpha model (equation 5) using the overall sample of funds in Table 2a, which is .634, with the average standard error of the equivalent regressions in Tables 3a and 3b, which is .501, shows a significant improvement in explanatory power from differentiating the sample into positive and negative alpha funds. What is clear is that the results in Table 2a are dominated by the poorly performing funds within the sample. The probable cause of this is the large variance of the MERs for the funds with negative alphas relative to the variance for the funds with positive alphas, as can be seen in Tables 6b and 6c.... ..."
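The linear model in the excerpt relates the expense ratio to the logarithm of fund size. A minimal sketch of that regression, using simulated data (the fund sizes, MERs, and coefficients below are made up for illustration, not taken from the paper's sample of 1015 funds):

```python
import numpy as np

# Hypothetical data: fund sizes and expense ratios (MERs, in percent),
# generated so that fees decline in log(size) as the excerpt reports.
rng = np.random.default_rng(0)
size = rng.uniform(10, 5000, 200)
mer = 2.5 - 0.15 * np.log(size) + rng.normal(0, 0.1, 200)

# Ordinary least squares of MER on a constant and log(size), matching
# the finding that fees relate to the logarithm of size, not to size.
X = np.column_stack([np.ones_like(size), np.log(size)])
coef, *_ = np.linalg.lstsq(X, mer, rcond=None)
intercept, slope = coef
print(slope)  # negative slope: larger funds charge lower fees
```

Splitting the sample by the sign of alpha, as Tables 3a and 3b do, amounts to running this same regression separately on the positive-alpha and negative-alpha subsamples and comparing the residual standard errors.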

### Table 2b Composition of the Expense Ratio using Gross Risk-Adjusted Returns (Sample of 1015 funds)


### Table 9: Precision after 5, 10 or 20 retrieved documents (Okapi search model)

Considering the expense of manual indexing, Table 9 shows that the enhancement is disappointing. At a precision after 10 documents, manual indexing achieves 54.8% compared to 52.8% for the automatic approach. Strictly speaking, this comparison is correct. However, if an institution such as INIST decides to manually index

"... In PAGE 16: ..., 2001). In order to obtain a more precise picture within this context, in Table 9 we reported precision results for 5, 10 or 20 documents retrieved using the Okapi probabilistic model. This table shows that the manual indexing scheme (labeled "MC & KW") obviously results in better performance when compared to the automatic indexing approach (labeled "TI & AB"), relative to the precision achieved after 5, 10 or 20 documents.... In PAGE 19: ... records, improvements in mean average precision are only significant for the best three query expansion parameter settings. If we compute the precision after 5, 10 or 20 documents using the best query expansion setting, Table 11 shows how Rocchio's blind query expansion improves precision compared to Table 9, which shows corpus indexing using all sections or when the indexing is limited to the articles' title and abstract sections. However, for manual indexing, even if the mean average precision increases from 29.56 to 33.33, the precision after 5 documents decreases from 59.2% (Table 9) to 58.4% (Table 11).... ..."
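Precision after k retrieved documents, the metric reported in Tables 9 and 11, is the fraction of the top-k ranked documents that are relevant. A minimal sketch (the document identifiers below are hypothetical, not from the INIST collection):

```python
def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked documents that are relevant."""
    top_k = ranked[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

# Toy run over a hypothetical ranking of ten documents.
relevant = {"d1", "d3", "d4", "d7", "d9"}
ranked = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
for k in (5, 10):
    print(k, precision_at_k(relevant, ranked, k))
```

Note that, unlike mean average precision, this metric only looks at the top of the ranking, which is why the excerpt can report mean average precision rising while precision after 5 documents falls.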

### Table 1: Three cost models

Under cost model M1, a physical plan of P is a set of the view subgoals in P, and the cost measure is the number of subgoals in P. That is, the cost of a physical plan F is: cost_M1(F) = number of subgoals in F. The main motivation of cost model M1 is to minimize the number of join operations, which tend to be expensive in practice, when a rewriting is evaluated. Under cost model M2, a physical plan F of rewriting P is a list g1, ..., gn of the view subgoals in P. The views corresponding to these subgoals are joined in the order listed. After joining the first i subgoals in the list, the intermediate relation IRi is the join result with all attributes retained [11]. The cost measure for F under M2 is the sum of the sizes of the views joined, plus the sizes of the intermediate relations

2001

"... In PAGE 3: ...2 Efficiency of rewritings Let P be a rewriting of a query Q using views V. We define three cost models, as shown in Table 1. For each of them, we define a physical plan for P and a cost measure on... ..."
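The two cost measures described above can be sketched as follows. The view sizes and the intermediate-relation size estimator are hypothetical placeholders, and which intermediate relations are counted (here IR_2 through IR_{n-1}, i.e. excluding single views and the final result) is an assumption the excerpt does not pin down.

```python
def cost_m1(plan):
    """M1: cost is simply the number of view subgoals in the plan."""
    return len(plan)

def cost_m2(plan, view_size, ir_size):
    """M2: sum of the sizes of the views joined, plus the sizes of the
    intermediate relations produced along the join order."""
    view_cost = sum(view_size[g] for g in plan)
    # IR_i is the result of joining the first i subgoals; we count the
    # strictly intermediate ones (an assumption, see lead-in above).
    ir_cost = sum(ir_size(plan[:i]) for i in range(2, len(plan)))
    return view_cost + ir_cost

# Toy usage with made-up view sizes and a crude IR-size estimator.
sizes = {"v1": 100, "v2": 50, "v3": 20}

def toy_ir_size(prefix):
    return min(sizes[g] for g in prefix)  # placeholder estimate

plan = ["v1", "v2", "v3"]
print(cost_m1(plan))                       # number of joins to minimize
print(cost_m2(plan, sizes, toy_ir_size))   # view sizes + intermediate sizes
```

Under M1 any plan with fewer subgoals wins; under M2 the join order matters, since different orders produce intermediate relations of different sizes.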
