### Table 1: Reduction in delay uncertainty for different PEPV tolerance levels

2001

"... In PAGE 3: ... In evaluating the benchmark circuits it is assumed that the original clock tree topology for the benchmark circuits is a balanced tree and that there are up to four branches leav- ing each branch point within the clock tree. The results of this analysis are listed in Table1 . It is shown that the delay uncertainty of the most critical data paths can be reduced by... ..."

Cited by 2

### Table 7.1: Most important features of the purposive modules. By "range" is meant the tolerable size of the uncertainty on the vehicle position.

### Table 1: Sources of uncertainty (Klein et al., 1994), given by standard deviation (SD) and mean value, as shown in Figure 2.

"... In PAGE 3: ... However, there are situations in which MCS can be comparably fast or FORM can be comparably accurate. Probabilistic analysis typically involves two areas of statistical variability as shown in Table1 . The first group consists of the uncontrollable uncertainties and tolerances.... ..."

Cited by 1

### Table 1. The microform geometry requirements of Rockwell diamond indenters and NIST expanded uncertainties (95 %)

"... In PAGE 4: ... Instrument Setup, Calibration and Check Standards, and Calibration Procedures The Rockwell diamond indenter is a diamond cone with 1208 of cone angle blended in a truly tangential manner with a spherical tip of 200 mm radius. The microform geometry and calibration requirements ac- cording to ISO and ASTM standards [2-4], are shown in Table1 . The working-grade indenters are used for the regular Rockwell hardness tests, while the calibration- grade indenters are reserved for calibrations of standard- ized hardness blocks.... In PAGE 12: ... The expanded uncertainty (95 %) is Um = 6tpum = 60.00858, less than 1/10 of the tolerance requirement for the calibration- grade Rockwell diamond indenters specified in ISO and ASTM standards ( Table1 ) [2,4]. 6.... In PAGE 13: ...0238 (95 %). This is less than 1/10 of the tolerance requirement for calibra- tion-grade diamond indenters ( Table1 ). This value may also be reduced further by improving the rotary stage alignment.... ..."

### Table 2. Stopping tolerance behavior for the interval Newton method with the function x² − 2.

2000

"... In PAGE 3: ... ) A domain tolerance condition corresponding to (1) is k = max 1 i n 8 lt; : w x(k) i max n x(k) i ; 1 o 9 = ; lt; (interval domain tolerance condition); (6) where w x(k) i represents the width of the i-th coordinate interval. A range toler- ance condition corresponding to (4) is 0 2 fi(x(k)) and w fi(x(k)) lt; ; 1 i n; or, equivalently, k = max 1 i n fi(x(k)) lt; (interval range tolerance condition): (7) Application of the interval Newton method (5) to Example 1, with starting bounds x(0) = [1; 2] gives Table2 . In Table 2, the library INTLIB [6] and Fortran 90 module INTERVAL ARITHMETIC [4] were used for simulated directed roundings; the intervals in the table were rounded outward for display.... In PAGE 4: ... (For an explanation of these terms and algorithms, see [5].) On the other hand, the interval Newton method may become stationary at some x(k) due to roundout error, as in Table2 , or due to uncertainty in the data3. In such cases, it is not appropriate to subdivide x(k) further.... ..."

Cited by 7

### Table 2: Summary of results for the fault tolerant photodiode APS for normal and electrically faulty pixels

"... In PAGE 6: ... Regression analysis was employed to find the slope of each set of data, which corresponds to the sensitivity of each pixel. Table2 and Table 3 summarize the sensitivity of each pixel and the sensitivity ratio. While the results from the optical fault tests have greater uncertainty, they confirm that pixels with optical faults behave similarly to pixels with electrical faults.... ..."

### Table 3: Summary of results for the fault tolerant photodiode APS for normal and optically faulty pixels

"... In PAGE 6: ... Regression analysis was employed to find the slope of each set of data, which corresponds to the sensitivity of each pixel. Table 2 and Table3 summarize the sensitivity of each pixel and the sensitivity ratio. While the results from the optical fault tests have greater uncertainty, they confirm that pixels with optical faults behave similarly to pixels with electrical faults.... ..."

### Table 1. Stopping tolerance behavior for the point Newton method with the function x² − 2.

The maximum attainable accuracy in a Newton iteration can be difficult to predict with floating-point iterations. In such instances, the domain and range tolerances will not terminate the algorithm, and behavior as in iterations 4 through 6 in Table 1 will occur, although the iteration may be more erratic for more complicated functions and with more than one variable, and the actual magnitudes of the stopping-test values may not be predictable. Such uncertainties can be automatically handled with multidimensional interval Newton methods, as described in texts on interval computations, such as [7], [1], [8], [3], [5]. In one dimension, interval Newton methods are of the form ...

2000

"... In PAGE 2: ... Since f0 ?x( ) = 2p2 is non-singular, the domain and range tolerances are roughly equivalent, and the iterations in Table 1 are obtained1. It is seen in Table1 that the domain tolerance (1) and the range tolerance (4) re ect actual error behavior well until the errors become small enough for roundo error to dominate the computation. For this example, a serious problem could occur only if were set to be on the order of the square of the machine epsilon, or were set to be on the order of the machine epsilon2.... ..."

Cited by 7

### Table 1: Effect of Problem Size on Convergence Rate. Note: Each two-grid MGCG method is implemented twice for a sample problem: once on a small mesh of 17,289 unknowns and once on a larger mesh of 126,225 unknowns. Variants of MGCG differ from each other in the number of coarse-grid relaxation passes. n_l/n_s is a measure of each method's independence from mesh size.

"... In PAGE 16: ...2, we were able to run the two-grid MGCG methods on two di erent mesh discretizations of the same problem. Table1 shows the number of iterations, ns and nl, necessary for the various methods (including DCG and ICCG) to reach convergence on the small problem (17,289 unknowns) and on the \large quot; problem (126,225 unknowns), respectively. For a method truly independent of mesh size, the ratio of these two, nl=ns, would be 1.... ..."

### Table 3: Estimated vs. actual time-domain errors in a recorded head-motion sequence, 100 ms prediction interval

...achievable upper bound because we cannot put enough restrictions on the phase to prevent that from being a possibility. By using this procedure, a system designer could specify the maximum tolerable time-domain error, then determine the maximum acceptable system delay that keeps errors below the specification. Unfortunately, the estimate is not a guaranteed upper bound because of uncertainties in the power spectrum (as mentioned at the end of Section 3) and because the measured spectrum is an average of the entire signal, which may not represent what happens at a particular subsection of the signal. Therefore, how closely do the estimated maximum bounds match the actual peak errors, in practice?

1995

"... In PAGE 8: ... Unfortunately, the estimate is not a guaranteed upper bound because of uncertainties in the power spectrum (as men- tioned at the end of Section 3) and because the measured spectrum is an average of the entire signal, which may not represent what happens at a particular subsection of the signal. Therefore, how closely do the estimated maximum bounds match the actual peak errors, in practice? Table3 lists the estimated maximums against the actual for all six degrees of freedom in one recorded HMD motion sequence. The maximums are usually within a factor of two of each other, although for the Ty sequence the estimated peak is lower than the actual peak.... In PAGE 8: ...3 estimates the peak time-domain error, but a more useful measurement may be the average time-domain error. Note that the peaks in Table3 are much larger than the average errors. An expression to estimate the average error could be useful.... ..."

Cited by 34