### Table 5: Robustness: Relative degradation in F results for Chunking. For each parser the result shown is the ratio between the result on the noisy SWB data and the clean WSJ corpus data.

2001

"... In PAGE 5: ... This is shown more clearly in Table 5, which compares the relative degradation in performance each of the parsers suffers when moving from the WSJ to the SWB data (Table 2 vs. Table 4). ..."

Cited by 24

### Table 1. Relative predictive performance of the robust and non-robust estimators. Mean of 25 Model Errors (G:

"... In PAGE 7: ... Four underlying functions used in the Monte Carlo experiment. We use the four test functions defined in Table 1 of Donoho and Johnstone [1]: blocks, bumps, heavisine and Doppler. We normalize them such that their "standard deviation" is equal to 7, $\int_0^1 (f(x) - \bar f)^2\,dx = 49$, where $\bar f = \int_0^1 f(x)\,dx$. The four functions are plotted in Figure 2. ... In PAGE 8: ... Based on Table 1, we see that the simulation results can be separated into two groups: the predictive performance of the robust procedure is poor for bumps and blocks, and good for heavisine and Doppler. This comes as no surprise ..."
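The normalization quoted in this excerpt can be sketched numerically. A minimal example, assuming the standard Donoho–Johnstone definition of the heavisine test function and a uniform grid on [0, 1]:

```python
import numpy as np

def heavisine(x):
    # Donoho-Johnstone "heavisine" test function
    return 4.0 * np.sin(4.0 * np.pi * x) - np.sign(x - 0.3) - np.sign(0.72 - x)

def normalize_sd(f_vals, target_sd=7.0):
    """Rescale deviations from the mean so that the discrete analogue of
    int_0^1 (f(x) - fbar)^2 dx equals target_sd**2."""
    fbar = f_vals.mean()
    sd = np.sqrt(np.mean((f_vals - fbar) ** 2))
    return fbar + (f_vals - fbar) * (target_sd / sd)

# sample on a uniform grid over [0, 1) and normalize
x = np.linspace(0.0, 1.0, 2048, endpoint=False)
f = normalize_sd(heavisine(x))
```

The same `normalize_sd` helper (a name introduced here, not from the cited paper) applies unchanged to the blocks, bumps, and Doppler functions once they are sampled on the grid.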

### Table 4. Relative delay and relative robustness for various filters (permissible vibration error 0.1%). Only the best filters in each case are shown.

### Table 1: Summary of GLS versus reactive tabu search and robust tabu search: mean % relative

### Table 2. Comparison of various conventional filters with respect to relative delay and relative robustness, when the total number of filter coefficients is limited to 8. Only the best filters in each case are shown.

"... In PAGE 9: ... The results of this comparison are presented in Table 3. Compared to the results of the previous Table 2, the three FIR filters with the smallest time delays have now almost reached their best performance. ..."

### Table 3. Comparison of various conventional filters with respect to relative delay and relative robustness, when the total number of filter coefficients is limited to 16. Only the best filters in each case are shown.

### Table 4. Results for scene geometry invariance evaluation. A low value for the variation of the invariants ^H, ^C, or ^W relative to ^E indicates robustness against scene geometry. Note

### Table 1. Comparing formulations on integral dose delivered to phantom and integral dose delivered to normal tissue under the realized pmf. The relative amount of dose delivered (%) by the various formulations is normalized to the margin formulation. (Columns: Nominal, Robust, Margin.)

2006

"... In PAGE 11: ... As you can see in Figure 10, the intensity map (beamlet weights) corresponding to the robust solution is not overly complex. Table 1 shows the numerical results corresponding to the previous figures. We can see that the robust solution delivers 8. ..."

Cited by 1

### Table 1.1 LAPACK codes for computing eigenpairs of a symmetric tridiagonal matrix of dimension n, see also [2, 4]. Note that in addition to computing all eigenpairs, inverse iteration and the MRRR algorithm (MRRR = Multiple Relatively Robust Representations) also allow the computation of eigenpair subsets at reduced cost.
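As a usage sketch (not taken from the cited work), the LAPACK tridiagonal eigensolvers summarized in this table are reachable from Python through `scipy.linalg.eigh_tridiagonal`, whose `lapack_driver` parameter selects the underlying routine; `'stemr'` is the MRRR-based driver:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 8
d = 2.0 * np.ones(n)        # main diagonal
e = -1.0 * np.ones(n - 1)   # off-diagonal (discrete 1-D Laplacian)

# 'stemr' requests the LAPACK driver built on Multiple Relatively
# Robust Representations (MRRR); eigenvalues are returned ascending,
# eigenvectors as the columns of v
w, v = eigh_tridiagonal(d, e, lapack_driver='stemr')
```

The function also accepts `select='i'` with a `select_range` to request only a subset of eigenpairs, matching the reduced-cost subset computation the caption mentions for inverse iteration and MRRR.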

### Table 6: Robustness of tournament sheets.

2005

"... In PAGE 16: ... That is, we will use one probability model to create a contrarian sheet, and see how it fares in ROI simulations performed under another model. Table 6 gives the results, where rows correspond to true probability models and columns to the model used to derive the "optimal" sheet. This table can be read two different ways. ... In PAGE 20: ... Table 10 contains the average ROI for this approach in its "predicted champion-only contrarian" columns. Comparing these results to Table 6 gives an indication of how our predicted champion-only contrarian algorithm performs relative to the case where we know exactly how our opponents bet. In 58% of the cases, our algorithm obtained the same sheet as when we assume this knowledge, and therefore delivered an equal simulated ROI. ..."