### Table 2 Percentages of Regressions by Mr. Chips

1997

"... In PAGE 11: ... Chips depends on his retinal structure and the nature of the saccade noise. Table 2 shows sample percentages. The number of regressions rises when there is saccade noise or when scotomas are present.... ..."

Cited by 20

### Table 5: Results for Gaussian variation sources.

2007

"... In PAGE 6: ... We also compare n2SSTA with our implementation of [2] (denoted as linSSTA) by assuming Gaussian variations and a linear delay model for both. From Table 5, we see that in predicting σ/μ, n2SSTA matches Monte Carlo simulation well with about 5.5% error, while linSSTA has about 11% error.... In PAGE 6: ... This clearly shows that n2SSTA is not only more general, but also more accurate than linSSTA. Note that n2SSTA has a larger error for Gaussian variation sources in Table 5 than for uniform or triangle variation sources in Table 4, and this is because n2SSTA needs bigger bounds (10) for Gaussian variations than for uniform or triangle variations. Interestingly, we find that both approaches predict the 95% yield point well.... ..."

Cited by 1

### Table 1: Ranges of SUSY parameters used for independent variation in the study of the MSSM neutral Higgs boson searches.

1994

"... In PAGE 4: ... Throughout our paper, the top quark mass is fixed to mt = 175 GeV. In order to study the effect of the variation of the SUSY parameters described above we scan them in the ranges given in Table 1. The parameters shown in... In PAGE 4: ...Table 1: Ranges of SUSY parameters used for independent variation in the study of the MSSM neutral Higgs boson searches. Table 1 are the input parameters for the calculations of the physical sfermion, chargino, and neutralino masses.... In PAGE 5: ...0.5 ≤ tan β ≤ 50. The lower bound is based on [16]. The variation of the upper bound has no significant effect on the results. In our approach tan β is a function of mh, mA and the SUSY parameters listed in Table 1. The lower bound on tan β affects the theoretically allowed regions in the (mh,mA) and (mH,mA) planes.... In PAGE 6: ...Table 1. Circle-marked lines show the upper bound on mh obtained in the EPA for the same set of SUSY parameters (Table 2) as the cross-marked lines.... In PAGE 6: ... The other possible sources of uncertainties are connected with the increase of the range of the variation over the MSSM parameter space and with the higher order radiative corrections. We have checked that the change of the limits shown in Table 1 has only a small effect on the results discussed in next sections. A decrease of the lower limit for the sfermion and gaugino masses causes in most cases at least one SUSY particle to be light and observable.... In PAGE 6: ... Non-diagonal soft-breaking couplings are the source of sizeable flavor-changing neutral currents (and, if complex, CP breaking effects [17]), which are ruled out by existing data. Therefore, the choice of the bounds shown in Table 1 is well motivated and results are found to be stable against small variations. Finally, some estimates are given of the 2-loop corrections for the MSSM Higgs boson masses [18].... In PAGE 7: ...
In addition, a combined LEP1 limit on non-standard Z0 decays has been applied: Γ_Z^max < 31 MeV at 95% CL [15]. A given (mh,mA) combination is excluded if for all SUSY parameter sets (from the ranges defined in Table 1 and for fixed mt = 175 GeV) the expected number of events in at least one of the channels is excluded at 95% CL. Figure 2 shows regions in the (mh,mA) plane which are excluded by the individual channels listed above.... In PAGE 8: ... Above mA ≈ 100 GeV the bremsstrahlung production of h0 is sufficient to establish, independent of the SUSY parameters, the Higgs mass bound of 55 GeV. Even in this range of mA, for special SUSY parameter combinations (outside the values defined in Table 1), a light h0 can escape detection for very large squark mixing. Such combinations, however, are unlikely from the theoretical and experimental point of view, as discussed in the Sec.... In PAGE 9: ... A linear interpolation has been used to obtain the sensitivity for mass combinations between simulated mass points. Four regions are distinguished in the (mh,mA) and (mA, tan β) planes: (A) The sensitivity region where, by direct searches, a Higgs signal cannot escape detection, for any choice of the SUSY parameters from the ranges given in Table 1 and for fixed top quark mass of 175 GeV. (B) The region where the perspectives of direct searches depend on the SUSY parameters.... ..."

### Table 1. Characteristics of Items Required for X Manufacturer Production Process Quantity demanded

"... In PAGE 9: ...warehouse center (i.e., branch A) are 15, 8 and 24 kilometers, respectively. The volume size, price, and quantity demanded for these 50 items are detailed in Table 1, in which large coefficients of variation (CV) of the three attributes indicate remarkable heterogeneity of the items, which implies that clustering of items might be beneficial. For confidentiality reasons, some parameters of the cost models are assumed: per ordering administrative cost = 10,000 dollars; unit inventory holding cost = 10 dollars/cm3-year; unit transportation cost = 100 dollars/truck-kilometer; unit distribution cost = 50 dollars/truck-kilometer; truck capacity = 3,000 cm3; annual interest rate = 5%; coefficient of scale economy in warehousing = 0.... ..."

### Table 7: Multi-Gaussian problem: Average error rates and their standard deviations for classifiers based on 100 selected examples (10% of the training set). Statistics based on 100 independent runs. Selective sampling method Average error rate Standard deviation

1999

"... In PAGE 29: ...matrix = 12I. The data is illustrated in Figure 21. The Bayes decision boundary can be approximated analytically, and the Bayes error is 20.8%, while the expected error of a nearest-neighbor classifier based on 1000 points is 29.9%. The learning curves of the various selective sampling methods are shown in Figure 22 and the numerical results are summarized in Table 7. Again, it can be seen that the uncertainty and the maximal-distance selective sampling methods fail to detect some of the Gaussians, resulting in higher error rates.... ..."

Cited by 31
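The experimental setup these snippets describe — train a nearest-neighbor classifier on a small selected subset of a Gaussian-mixture pool and measure its test error — can be sketched in a few lines. Everything below (the two-Gaussians data, the random-selection baseline, and all names) is an illustrative assumption, not the paper's code; the paper's uncertainty and maximal-distance strategies would rank the pool instead of sampling uniformly.

```python
import random

def nn_label(train, x):
    """Label of the nearest training point (1-NN, squared Euclidean, 2-D)."""
    return min(train, key=lambda t: (t[0] - x[0]) ** 2 + (t[1] - x[1]) ** 2)[2]

def random_selective_sampling(pool, k, seed=0):
    """Baseline selective sampling: draw k pool points uniformly at random."""
    return random.Random(seed).sample(pool, k)

# Toy two-Gaussians problem: class 0 centered at (-1, 0), class 1 at (+1, 0).
rng = random.Random(1)
pool = [(rng.gauss(2 * c - 1, 1.0), rng.gauss(0.0, 1.0), c)
        for c in (0, 1) for _ in range(500)]
train = random_selective_sampling(pool, 100)   # select 10% of the pool
test = [(rng.gauss(2 * c - 1, 1.0), rng.gauss(0.0, 1.0), c)
        for c in (0, 1) for _ in range(500)]
err = sum(nn_label(train, (x, y)) != c for x, y, c in test) / len(test)
```

Replacing `random_selective_sampling` with a scoring strategy (e.g. picking the pool point whose neighbors disagree most) reproduces the kind of comparison the learning curves show.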

### Table 6: Two Gaussians problem: Average error rates and their standard deviations for classifiers based on 100 selected examples (10% of the training set). Statistics based on 100 independent runs. Selective sampling method Average error rate Standard deviation

1999

"... In PAGE 27: ... The Bayes decision boundary can be derived analytically, and the Bayes error is 18.2%, while the expected error of a nearest-neighbor classifier based on 1000 randomly drawn training points is 18.2%. The average errors of nearest-neighbor classifiers based on 10% of the training set selected by various selective sampling methods are summarized in Table 6. The learning curves of the various selective sampling methods are shown in Figure 20.... ..."

Cited by 31

### Table 4: Experiments for non-Gaussian variations and nonlinear delay.

2007

"... In PAGE 5: ... We compare the solution quality of n2SSTA with the golden Monte Carlo simulation of 100,000 runs. Similar to the experiment setting in [12], Table 4 compares n2SSTA and Monte Carlo simulation in terms of the ratio between sigma and mean, the 95% yield timing, and runtime in seconds. In the first (or second) set of experiments, all variation sources follow a uniform (or a triangle) distribution.... In PAGE 6: ... This clearly shows that n2SSTA is not only more general, but also more accurate than linSSTA. Note that n2SSTA has a larger error for Gaussian variation sources in Table 5 than for uniform or triangle variation sources in Table 4, and this is because n2SSTA needs bigger bounds (10) for Gaussian variations than for uniform or triangle variations. Interestingly, we find that both approaches predict the 95% yield point well.... ..."

Cited by 1
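The two comparison metrics named in this snippet — the sigma/mean ratio and the 95% yield point of the delay distribution — are straightforward to compute from Monte Carlo samples. The sketch below uses an invented toy delay model with uniform variation sources purely for illustration; it is not the paper's benchmark circuit or its n2SSTA algorithm.

```python
import random
import statistics

def monte_carlo_delay(n_runs=100_000, seed=0):
    """Golden-reference style Monte Carlo: sample a toy path delay under two
    uniformly distributed variation sources, then report sigma/mu and the
    95% yield point (the delay not exceeded in 95% of runs)."""
    rng = random.Random(seed)
    delays = []
    for _ in range(n_runs):
        x1 = rng.uniform(-1.0, 1.0)
        x2 = rng.uniform(-1.0, 1.0)
        # toy nonlinear delay model: nominal 10.0 plus a cross term
        delays.append(10.0 + 0.5 * x1 + 0.3 * x2 + 0.2 * x1 * x2)
    mu = statistics.fmean(delays)
    sigma = statistics.stdev(delays)
    delays.sort()
    yield95 = delays[int(0.95 * n_runs)]   # empirical 95% quantile
    return sigma / mu, yield95
```

An analytical SSTA tool would be judged by how closely its predicted sigma/mu and 95% point match such a Monte Carlo reference.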

### Table 1: Average of CPU run-time estimates (in seconds) and relative standard deviation for each group of observations.

"... In PAGE 7: ... observation is independent of the others, and each group is handled independently of the others. The result is presented in Table 1. Each row contains the average (in seconds) and relative standard deviation, also called the coefficient of variation (in percent), for low load (87 groups) and high loads (338 groups) of observations, 1 row for each estimation technique.... In PAGE 7: ... The columns are given for different k. When setting up Table 1 we have assumed that, for each k and estimation technique, the resulting CPU run-time values will vary according to a Gaussian distribution. Since we have 87 and 338 groups to compute over (low load and high load, respectively), we may meaningfully interpret the average (given in seconds) and the standard deviation (given in percent of the average) of these estimates.... In PAGE 8: ...5% confidence. Using the data from Table 1, we may then decide how many observations are needed if we need to be certain that two programs with slightly dissimilar run-times have significantly distinct CPU run-times. This is tabulated in Table 2 for low load, high load, over all loads, and for varying group sizes.... ..."
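The two quantities this snippet works with — the coefficient of variation of a group of run-time estimates, and the number of observations needed to separate two similar run-times under a Gaussian assumption — can be sketched as below. The function names and the z-based sample-size formula are illustrative assumptions, not the paper's exact procedure.

```python
import math
import statistics

def coeff_of_variation(samples):
    """Relative standard deviation: stdev as a percentage of the mean."""
    return 100.0 * statistics.stdev(samples) / statistics.fmean(samples)

def groups_needed(cv_percent, rel_diff_percent, z=1.96):
    """Rough Gaussian sample-size estimate: how many independent group
    averages are needed so that a relative run-time difference of
    rel_diff_percent stands out at the given z (1.96 ~ 95%, two-sided)."""
    return math.ceil((z * cv_percent / rel_diff_percent) ** 2)
```

For example, with a 10% coefficient of variation, detecting a 5% run-time difference at ~95% confidence needs on the order of sixteen group averages under this crude model.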

### Table 5: Means and Standard Deviations of Independent Variables Variable Mean Std. Dev.

"... In PAGE 26: ...=== Insert Table 4 About Here === === Insert Table 5 About Here === === Insert Table 6 About Here === RESULTS Mobility by destination We first establish a baseline set of results, and then show some variations. Table 7 shows Cox models of voluntary mobility within the county and out of the county.... ..."

### Table 1. Overall preferences (independent of image and bit-rate)

"... In PAGE 2: ... Beach image and VA ROI mask. Results Table 1 shows the overall preferences, i.e.... In PAGE 2: ...Table 1 shows the overall preferences, i.e., independent of (summed over) image and bit-rate, for standard JP2K and JP2K ROI coding with the ROI determined using the VA algorithm. Table 1 also shows the standard errors associated with the preferences assuming a Gaussian approximation to the Binomial distribution. From Table 1 it can be seen that standard JP2K is preferred over ROI coding approximately 65% of the time.... In PAGE 2: ... Table 1 also shows the standard errors associated with the preferences assuming a Gaussian approximation to the Binomial distribution. From Table 1 it can be seen that standard JP2K is preferred over ROI coding approximately 65% of the time. This shows that standard JP2K produces good quality images over a wide range of bit-rates and indicates that ROI coding may not be suitable as a general-purpose image coding technique.... ..."
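The standard error mentioned in this snippet, under the Gaussian (normal) approximation to the Binomial, is sqrt(p(1-p)/n) for an observed preference proportion p over n trials. A minimal sketch (the function name and the 65-of-100 example numbers are illustrative, not taken from the paper's data):

```python
import math

def preference_with_se(successes, trials):
    """Observed preference proportion and its standard error under the
    Gaussian approximation to the Binomial: se = sqrt(p * (1 - p) / n)."""
    p = successes / trials
    return p, math.sqrt(p * (1.0 - p) / trials)

# e.g. 65 preferences out of 100 paired comparisons
p, se = preference_with_se(65, 100)
```

A 65% preference with n = 100 carries a standard error of about 4.8 percentage points, so such a preference is comfortably above the 50% chance level.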