### Table 1: Comparison of incompatibilities among the Dylan, CLOS, and L*LOOPS linearizations. Compatibility with CLOS was considered important both because an existing body of Dylan code assumed that linearization and because substantial real-world experience with it in the Common Lisp community indicated that it could be used successfully.

Cited by 13

### Table 3.4: Additional work vectors required. Our final complexity criterion is the memory required. Table 3.4 lists the amount of memory, in terms of vectors of length n, required in addition to the matrix and the right-hand side. We ignore memory requirements of order k or k², since we assume that k is in general small compared to n. Note that it is possible for GBM to save a substantial amount of storage by replacing z_i in Algorithm 8 by the expression

1995

### Table 1: left: Execution times and overheads for some demo applications; right: Creation times and overheads for objects in the bank demo application. This relative overhead decreases as the complexity of the operation increases. When an operation is complex enough to justify its remote processing, the instrumentation overhead seems acceptable. The overhead is low, and could be even lower in large applications, where substantial amounts of data are transferred.

1997

"... In PAGE 5: ... Selected results of a preliminary performance study for Orbix 1.3 running over a network of SUN SPARCstations 5 have been shown in Table1 a, 1b, and 2. The client, the server, and the monitor were all running on separate machines.... ..."

Cited by 1

### Table 4 describes the synthetic datasets that were used for experimentation. Datasets 1 and 2 are very similar; dataset 2 has more noise and a weaker linear relationship than dataset 1. Dataset 3 is larger, also with a substantial amount of noise. Dataset 5 is similar to dataset 4, but 0.1% of the data is perturbed to create outliers with large positive values.

in U.S.A.

2007

"... In PAGE 15: ... Table4 : Synthetic Datasets 8.1.... ..."

### Table 2: Execution times in seconds. One can see that our algorithm is more reliable than the Euclidean division algorithm. Generally speaking, there exist well-established standard solvers for linear systems of equations (even more powerful than the one used for our simulations). By contrast, elementary polynomial operations can behave erratically. Moreover, linear system resolution is definitely faster than elementary polynomial operations. Indeed, MATLAB is primarily matrix-oriented software, whereas performing operations on polynomials usually requires a substantial amount of time-consuming nested loops.
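The contrast the caption draws can be illustrated outside MATLAB as well. Below is a minimal Python sketch of Euclidean polynomial division, showing the nested coefficient loops that make elementary polynomial operations slow relative to a single vectorized linear solve; the function name and the sample polynomials are illustrative, not taken from the paper:

```python
def poly_divmod(num, den):
    """Euclidean division of polynomials given as coefficient lists,
    highest-degree coefficient first, so that num = q * den + r."""
    num = [float(c) for c in num]
    q = [0.0] * (len(num) - len(den) + 1)
    for i in range(len(q)):            # outer loop: one quotient coefficient per step
        coef = num[i] / den[0]
        q[i] = coef
        for j, d in enumerate(den):    # inner loop: subtract coef * den from the remainder
            num[i + j] -= coef * d
    return q, num[len(q):]             # remainder is what is left past the quotient

# (x^2 + 3x + 2) / (x + 1) gives quotient (x + 2) and remainder 0
q, r = poly_divmod([1, 3, 2], [1, 1])
```

The two nested loops perform O(n·k) scalar operations in the interpreter, whereas a linear solve dispatches a single optimized matrix routine; that asymmetry mirrors the MATLAB behaviour described in the caption.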

1997

Cited by 2

### Table 1: a) Execution times and overheads for some demo applications; b) Creation times and overheads for objects in the bank demo application. For operation invocations, the relative overhead, defined as the ratio of the instrumented to the non-instrumented invocation time, is particularly important. The results presented in Table 2 show that this relative overhead decreases as the complexity of the operation increases. When an operation is complex enough to justify its remote processing, the instrumentation overhead seems acceptable. The overhead is low, and could be even lower in large applications, where substantial amounts of data are transferred.
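The relative overhead in the caption is simply the ratio of instrumented to non-instrumented invocation time. A small sketch of why it shrinks as operations get more expensive; the timings here are illustrative placeholders, not the paper's measurements:

```python
def relative_overhead(instrumented_s, plain_s):
    """Ratio of instrumented to non-instrumented invocation time;
    values near 1.0 mean the instrumentation cost is negligible."""
    return instrumented_s / plain_s

# Illustrative only: a fixed per-invocation instrumentation cost (here 2 ms)
# dominates a cheap call but is negligible for an expensive one.
cheap = relative_overhead(0.012, 0.010)   # 10 ms operation -> ratio 1.2
costly = relative_overhead(1.002, 1.000)  # 1 s operation   -> ratio 1.002
```

This matches the caption's observation: once an operation is expensive enough to justify remote processing, the same absolute instrumentation cost yields a much smaller relative overhead.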

1997

"... In PAGE 6: ... The selected results of a preliminary performance study for Orbix 1.3 running over a network of SUN SPARCstations 5 have been shown in Table1 a, 1b, and 2. The client, the server, and the monitor were all running on separate machines.... ..."

Cited by 1

### (Table 3 to be placed approximately here) From the table we observe that all classes are present in the set of predicted carriers. Figure 8 illustrates the trade-off in selectivity. In the figure, two different parameter combinations have been used and the resulting sets are plotted onto the plane after multidimensional scaling. Although the heuristic discards a substantial number of carriers during the process, the remaining haplotypes are the most likely carriers, and they are likely to show the most distinctive carrier haplotype patterns. They can be useful, for instance, for developing gene tests before the actual gene is recognized and can be tested directly. (Figure 8 to be placed approximately here)

2005

Cited by 1


### Table 11 reveals that a cache using the C/NA strategy has the potential to substantially reduce the amount of necessary bus bandwidth (by as much as 64% in the case of su2cor). Looking at this table and Tables 6 and 9, we also see that while the improved static scheme comes fairly close to the limit of bandwidth reduction, the dynamic schemes are not yet approaching the ideal.

1997

"... In PAGE 21: ... However, the table does show that in the optimal case a direct-mapped cache using C/NA has a hit rate that exceeds that of a 2-way set associative cache using LRU and approaches that of a 2-way set associative cache with optimal replacement or a 4-way set associative cache using standard LRU. Table11 shows the average number of bytes needed per data reference for four different cache configurations, while Table 12 contains the reduction in bandwidth requirements in per- centages compared to the optimal MR at the same level of associativity. The same cache param- eters as before were used, as well as the same bandwidth parameters (regular cache miss fetches 32 bytes, C/NA miss fetches 8).... In PAGE 23: ...rn Table11 : Average Number of Bytes/Reference in the Optimal Case ulululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululul Average Bytes per Reference, 8K byte Cache ulululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululul Direct Mapped Two-way SA Four-way SA Fully Associative ulululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululul Benchmark Optimal Optimal Optimal Optimal Optimal Optimal Optimal Program Standard CNA MR CNA MR CNA MR CNA ulululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululul ulululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululululul compress 6.5005 3.... ..."

Cited by 14

### Table 1: Quantitative results on sample distribution for KJS, as well as CSS and SS with different types of tails (scaled-Gaussian vs. HT, with and without optimization, NO OPT vs. OPT). KJS finds 1024 minima in 1024 samples for the first trial and 1466 minima in 1536 samples for the second round. The CSS/SS experiments used 2000 samples. Note that KJS finds many more minima than SS and CSS, and that its samples are already very close to the final minima in cost, whereas SS and CSS samples require a substantial amount of optimization to become plausible hypotheses. Also note that CSS has significantly better performance than SS, both in terms of numbers of minima found and median costs of raw samples.

2003

"... In PAGE 6: ... figuration. Table1 summarizes the results, giving the number of min- ima found by each method, and also their median costs (like- lihoods relative to the true configuration) and their distances from the starting configuration in both spherical parameter space units and covariance-scaled standard deviations. It gives statistics both for raw samples, and for samples after local continuous optimization subject to joint and body non- self-intersection constraints.... ..."

Cited by 62