### Table 8: Error comparison of Localized vs non-Localized algorithm

"... In PAGE 21: ... We also conducted a similar experiment, where the value of H was set to 10 and we varied the ratio k1/kgl for different values of k, and present the total SSE error in Figure 12. Table 8 also presents the errors for the non-localized approach for the second experiment and compares them to the errors of the localized algorithm when the ratio k1/kgl is set to 3. Two major observations can be drawn from Figures 11 and 12 and Table 8: 1.... ..."

### Table 1: A possible set of DFTM Algorithms. Local algorithms (L0-L2), re-mapping decision algorithms (M0-M3) and migration destination algorithms (D0-D4).

"... In PAGE 3: ... For a networked system consisting of battery powered devices, with high probabilities of failures in the interconnection links and devices, the statistics that may be monitored include: (1) Remaining battery capacity, (2) Link carrier sense errors, (3) Link collisions and (4) Node faults. Table 1 provides an example instantiation of simple DFTM algorithms for a system with the aforementioned failure statistics monitored. Nodes must employ heuristics to ascertain these properties at their neighbors, based on locally measured values.... In PAGE 3: ... The M-class algorithms determine when to perform such re-mapping. In the particular instantiations listed in Table 1, the M0 algorithm specifies to attempt to re-map an executing application when battery levels fall below a critical threshold. The threshold associated with an M0 algorithm must be conservative enough to ensure that the re-mapping process completes before energy resources are completely exhausted.... In PAGE 3: ...3 D-Class algorithms The re-mapping destination algorithm strives to determine a node that an application should be re-mapped to. Of the particular example D-class policies listed in Table 1, D1, D2 and D4 are well suited to situations in which links in a system fail and it is desirable that applications adapt around these failures. The D3 algorithm is relevant to all systems with limited battery resources, while the D0 algorithm may be enabled in systems in which it is either impossible or prohibitively expensive to determine the best neighbor to re-map to.... ..."
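The excerpt describes how M-class (when to re-map) and D-class (where to re-map) policies compose. A minimal sketch of one such pairing, roughly an M0-style battery threshold combined with a D3-style battery-greedy destination choice; the function names, units, and the safety factor are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch of a DFTM re-mapping decision.
# All names, units (millijoules), and thresholds are illustrative.

def should_remap(battery_mj: float, migrate_cost_mj: float,
                 safety_factor: float = 2.0) -> bool:
    """M0-style check: trigger a re-map while at least `safety_factor`
    times the migration cost remains, so the transfer can finish
    before the node's energy is completely exhausted."""
    return battery_mj < safety_factor * migrate_cost_mj

def pick_destination(neighbors: dict) -> str:
    """D3-style choice: the neighbor with the most remaining battery,
    as estimated from locally measured values."""
    return max(neighbors, key=neighbors.get)
```

The conservative threshold mirrors the snippet's requirement that re-mapping completes before energy runs out.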

### Table 1: Eye Region Localization error with different algorithms computed in terms of mean square distances from the manually set ground truths (as fractions of the whole image width). (ISO: isotropic, ARD: Automatic Relevance Determination, ONE: single variable parameterization)

"... In PAGE 4: ... 4.2 Eye Localization Results Table 1 shows the results for the eye localization performance of GPR and SVM with different parameters. Obtaining ground truth data from children can be challenging and will be attempted after the system will have...

### Table 8.3. Average results of the experiments with the support vector reduction algorithm applied to the classifier trained on the local features extracted from the KTH-INDECS database. The uncertainties are given as one standard deviation.

2005

### Table 2. Locality distances of Connected Dominating Set Algorithms for several approximation factors. The second column shows upper bounds for the (1+ε)-approximation algorithm and the third column the bounds for the (3+ε)-approximation algorithm presented in this paper. The fourth column shows upper bounds for the locality of the local (7.453+ε)-approximation algorithm for connected dominating set presented in [6].

"... In PAGE 19: ... This gives an upper bound for the locality distance, which works out to be in O(1/ε^6). Table 2 in Section 4 displays trade-offs between approximation ratios and locality distances which are attained by our algorithm and the algorithm presented in [6]. 3.... In PAGE 30: ... Table 2 displays trade-offs between approximation ratios and locality distances which are attained by the algorithms for connected dominating set presented in this paper and the algorithm presented in [6]. 4.... ..."

### Table 3: Evaluation of the runtime of the 2-D FFT algorithm on different systems and processor types (image dimension 256×256).

1991

"... In PAGE 2: ... The local part of the 2-D FFT in algorithms (2) and (3) was computed using the routine fourn out of [10]. The resulting runtimes for N = 256 are given in Table 3 and Table 1 for dif-... We thank Prof. Monien and the PC2 for the possibility to use the transputer system at the University of Paderborn.... In PAGE 3: ... On the one hand we may calculate a sequential 2-D FFT using the fourn routine on the biggest problem size that fits into the memory of a single processor. See Table 3 for a summary of the results for different processor types. For all computations the compiler was instructed to place the stack and code in the internal (fast) memory of the transputers.... In PAGE 3: ... The measured speed of 0.412 million flops (Mflops) defines the base in our calculations. If the number of floating point operations achieved by a single processing element is known, the sequential time may be calculated according to sequential time = (10 · N²/2 · 2 ld N) / (Mflops per PE). So any of the values out of Table 3 may be used to define the sequential time and Fig. 5 can be recalculated.... ..."

Cited by 6
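The excerpt's rule of thumb for sequential 2-D FFT time can be sketched as below. The operation count 10 · N²/2 · 2 ld N (which simplifies to 10 N² ld N, consistent with ~5 N ld N flops per length-N FFT over 2N row/column transforms) is reconstructed from the garbled excerpt, and the 0.412 Mflops figure is the excerpt's measured base; exact constants may differ in the paper:

```python
from math import log2

def sequential_time(n: int, mflops_per_pe: float) -> float:
    """Estimated sequential 2-D FFT runtime in seconds for an n x n
    image: flop count 10 * n^2/2 * 2*ld(n), divided by the sustained
    rate of one processing element (in Mflops)."""
    flops = 10 * n**2 / 2 * 2 * log2(n)
    return flops / (mflops_per_pe * 1e6)
```

With N = 256 and the 0.412 Mflops base this gives roughly 12.7 seconds, which is the kind of baseline the excerpt says Fig. 5 can be recalculated from.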


### Table 1: Mean square error for neuron weights and standard deviation for probability density estimates ( ) 6 Conclusions A new integrally distributed self-organizing learning algorithm for the class of neural networks introduced by Kohonen [1] was presented. The algorithm converges to an equiprobable topology preserving map for arbitrarily distributed input signals. It is shown that Kohonen's algorithm converges to a locally affine self-organizing map. Simulation results agree with theoretical predictions.

"... In PAGE 5: ... As expected, the results of the three algorithms are fairly similar, for the case of a uniformly distributed input signal. Table 1 contains the mean square error for the neuron weights and the standard deviation of the probability density estimate vector, ^p, for both simulations. It should be noted that the improvement in performance comes at an increase in computational cost.... ..."
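For reference, the standard Kohonen update the excerpt compares against can be sketched in a few lines; this is the textbook 1-D version with invented parameter names, not the paper's integrally distributed variant:

```python
def kohonen_step(weights, x, lr=0.1, radius=1):
    """One standard Kohonen update on a 1-D map: find the best-matching
    unit (closest weight to input x), then move it and its neighbors
    within `radius` toward x by learning rate `lr`."""
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    lo = max(0, bmu - radius)
    hi = min(len(weights), bmu + radius + 1)
    for i in range(lo, hi):
        weights[i] += lr * (x - weights[i])
    return weights
```

Repeating this step over samples drawn from the input distribution is what drives the map toward the topology-preserving configurations the conclusion discusses.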

### Table 2: Parameters to the Probabilistic Counting Algorithm

1996

"... In PAGE 8: ... The algorithm based on probabilistic counting estimates the size of the cube to within a theoretically predicted bound. The values of the parameters we used are shown in Table 2. The estimate is accurate under widely varying data distributions, ranging from uniform to highly skewed.... ..."

Cited by 66
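The probabilistic counting the excerpt refers to is in the style of Flajolet and Martin; the paper's actual parameter values are in its Table 2, which the snippet does not reproduce. A minimal sketch of the underlying idea, with SHA-256 as an illustrative stand-in for the paper's hash functions:

```python
import hashlib

def rho(v: int) -> int:
    """0-based position of the least-significant set bit of v."""
    return (v & -v).bit_length() - 1

def fm_estimate(items, num_maps: int = 64) -> float:
    """Flajolet-Martin-style distinct-count estimate: each hashed copy
    records rho(hash(x)) in a bitmap; the average position R of the
    lowest unset bit across maps gives the estimate 2^R / 0.77351."""
    items = list(items)
    total_r = 0
    for m in range(num_maps):
        bitmap = 0
        for item in items:
            digest = hashlib.sha256(f"{m}:{item}".encode()).digest()
            v = int.from_bytes(digest[:8], "big") or 1  # keep v nonzero
            bitmap |= 1 << rho(v)
        r = 0
        while bitmap & (1 << r):  # lowest unset bit position
            r += 1
        total_r += r
    return 2 ** (total_r / num_maps) / 0.77351
```

Because only bitmaps are kept (no per-item state), the estimate stays accurate regardless of how skewed the duplicate distribution is, which matches the excerpt's observation.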
