### Table 4: Variance across individuals

| | Response | Amount |
|---|---|---|
| Constant | 0.027 | 0.058 |

2006

"... In PAGE 23: ... However, there is heterogeneity across individuals. In Table 4 we present the posterior mean of the variance in the random effects for the various model variables, indicating the spread in effects across individuals. As the random effects may be highly dispersed, the story may be quite different for some individuals than for others. ... ..."

### Table 1: Best-fit parameter values. Unless otherwise stated, each parameter is a rate constant. Molecular amounts are measured in molecular number, and time is measured in minutes


### Table 5.1 shows durations measured from the Linux scheduler and the extensions that were made to it for event-driven frequency scaling. Elapsed times and overheads in this table are displayed both in CPU cycles and in equivalent time. As the processor frequency may be altered from 333 MHz up to 733 MHz, a constant number of cycles corresponds to a range in time; vice versa, a constant time value corresponds to a range in cycles.
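The cycle/time correspondence described in the caption can be sketched directly. The frequency bounds (333 MHz and 733 MHz) come from the text; the cycle and time values used below are illustrative only:

```python
# Sketch: a fixed cycle count maps to a range of times when the CPU
# frequency can vary between 333 MHz and 733 MHz, and vice versa.
# Frequency bounds are from the text; sample inputs are illustrative.

F_MIN_HZ = 333_000_000
F_MAX_HZ = 733_000_000

def cycles_to_time_range_us(cycles: int) -> tuple[float, float]:
    """A fixed number of cycles takes least time at the highest frequency."""
    return (cycles / F_MAX_HZ * 1e6, cycles / F_MIN_HZ * 1e6)

def time_to_cycle_range(micros: float) -> tuple[int, int]:
    """A fixed time interval contains fewest cycles at the lowest frequency."""
    sec = micros / 1e6
    return (round(sec * F_MIN_HZ), round(sec * F_MAX_HZ))
```

For example, an interval of 1 microsecond corresponds to anywhere between 333 and 733 cycles depending on the current frequency setting.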

### Table 2: Mapping of high level HPF operations to low-level communication types. Each of the three low-level operations is modeled as requiring time proportional to the amount of data communicated, with the constant of proportionality as shown.

1996

"... In PAGE 12: ... High-level operations in HPF give rise to one of these three types of low-level communication. Table 2 shows the correspondence between high-level and low-level communication operations. In general, it is impossible to predict how varying the parameters, , , and , will affect the contraction operations. ... ..."

Cited by 10
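The cost model in this caption, where each low-level operation takes time proportional to the data volume, can be sketched in a few lines. The operation names and per-byte constants below are hypothetical placeholders, not values from the paper:

```python
# Sketch of the linear communication cost model described above:
# predicted time = (per-type proportionality constant) * (data volume).
# Operation names and constants are illustrative, not from the paper.

COST_PER_BYTE = {
    "shift": 2e-8,      # hypothetical seconds per byte
    "broadcast": 5e-8,
    "reduction": 4e-8,
}

def comm_time(op: str, nbytes: int) -> float:
    """Time for one low-level communication, linear in the data moved."""
    return COST_PER_BYTE[op] * nbytes
```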

### Table 2: Amount Passed and Amount Received For All Participants and For Donors Only

"... In PAGE 9: ... Table 2). Perhaps a better test of the constant percentage pass test is the amount passed considering only subjects who donated positive amounts. ... ..."

### Table 1 lists b, k and the total memory required by the new algorithm for practical values of and . The memory requirements of our old algorithm, which knows N a priori [MRL98], are also listed. The new algorithm requires no more than twice the memory required by the old one. Figure 4 compares the memory requirements as N varies. The new algorithm requires a constant amount of space, no matter what the value of N is. The old algorithm can take advantage of the fact that sampling need not be carried out for small values of N, and can thereby save on memory.

1999

"... In PAGE 9: ...61 K 44.49 K Table 1: Values for number of buffers b, size of each buffer k and total memory required by the new algorithm for different values of and . Also listed are memory requirements by our old algorithm that knows N a priori (N is assumed to be large enough to warrant sampling). ... ..."

Cited by 73

### Table 1: Estimated slopes of log-log plots, corresponding to the order of the polynomial complexity. We measure computing time while increasing a single parameter of the base case of 300 examples, 3 classes and 3 kernels. The ranges of values [min, max] on a log scale were [100, 5000] examples, [3, 100] classes, and [2, 20] kernels. The QCQP is also a constant amount slower for any particular dataset size.

2007

"... In PAGE 5: ... We optimize all methods to a relative duality gap of 10^-2. In Table 1 time complexities are shown that are estimated from the slopes of log-log plots. The SILP is consistently faster and has a better scaling behaviour on the data compared to the QCQP. ... ..."

Cited by 2
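The slope-estimation technique in this caption follows from the fact that if runtime grows as t ≈ c·n^p, then log t = log c + p·log n, so the slope of a log-log fit estimates the polynomial order p. A minimal sketch, on synthetic quadratic timing data rather than the paper's measurements:

```python
# Sketch: estimating polynomial complexity order from the slope of a
# log-log plot. The timing data below is synthetic (t = 3 * n^2),
# not from the paper.

import math

def loglog_slope(ns, ts):
    """Least-squares slope of log(t) against log(n)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in ts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ns = [100, 300, 1000, 3000, 5000]
ts = [3 * n**2 for n in ns]
print(loglog_slope(ns, ts))  # slope ≈ 2.0 for quadratic scaling
```

In practice the measured slope only approximates the order, since lower-order terms and constant overheads bend the curve at small problem sizes.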

### Table 5: As the gasket is rotated in its plane by 10°, the estimated pose changes by a similar amount. The plane position, given by ; and r, remains fairly constant.

1992

"... In PAGE 10: ... constant. Table 5 shows the good agreement between actual and experimental estimation of relative motion. There are situations where the T matrix method is unstable. ... ..."

Cited by 14

### Table 1: Complexity of the algorithms. The data in Table 1 are measured directly from the program and are accurate only up to the constant in front of the n. The exact amount of work space required may be slightly higher for GMRES and its variants, since these require some additional space to store the Hessenberg matrix.

"... In PAGE 6: ...e discussed in Section 3. We must pass work space to the iterative solvers. Here we will briefly discuss the work space requirements and the time complexities of the iterative solvers. Table 1 lists the space and time complexities, where n is the dimension of the linear system. For both GMRES(m) and DQGMRES(k), m, k << n. ... ..."
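The work-space accounting hinted at in this caption can be sketched for restarted GMRES(m). The counts below are the textbook storage requirements (Krylov basis vectors plus the Hessenberg matrix), not necessarily the constants measured by this paper:

```python
# Sketch: approximate work-space size (in floats) for restarted
# GMRES(m). The dominant term is linear in n (the Krylov basis),
# while the Hessenberg matrix adds an O(m^2) term independent of n.
# These are textbook storage counts, not the paper's measured values.

def gmres_workspace_floats(n: int, m: int) -> int:
    krylov_basis = (m + 1) * n   # m+1 basis vectors of length n
    hessenberg = (m + 1) * m     # (m+1) x m upper Hessenberg matrix
    return krylov_basis + hessenberg
```

For m << n the Hessenberg term is negligible, so the space is roughly (m + 1)·n, i.e. linear in n with a constant that depends on the restart length, which is consistent with the caption's "accurate to the constant in front of the n".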