### Table 1: complexities of point multiplication methods of Section 2

"... In PAGE 7: ...2.1 Point Multiplication for the EC Key Exchange. Table 1 shows that methods A-3 and A-4 are the most expensive methods when the precomputation stage is executed during key exchange. Therefore only the best results for the sliding-window and the wNAF-based exponentiation are presented in Tables 7 and 8 on page 8.... In PAGE 8: ...and JM. Method A-3 always needs n precomputed points. The optimal choice for A-4 is: n = 160: h = 11, v = 15 for all systems; n = 200: h = 12, v = 9 for affine and h = 12, v = 6 for projective systems; and n = 300: h = 12, v = 5 for affine and h = 10, v = 5 for projective systems. With the evaluation stage complexities of Table 1 on page 4 and the given parameter w, with n = 160 both methods A-1 and A-2 need at least 144 DOUBLEs and 9 ADDs each, whereas A-3 and A-4 require no DOUBLEs at all and 53 and 14 ADDs, respectively. Therefore we only summarize the timings of methods A-3 and A-4 in Tables 9 and 10.... In PAGE 9: ... Table 10: run-times of the Lim-Lee method in ms in affine representation. For A-3 the best result is achieved by using Jacobian-Chudnovsky coordinates with precomputed affine points.... In PAGE 10: ... Table 11: run-times of the simultaneous sliding-window exponentiation in ms. w1 ≤ 15 for all projective systems; n = 300: w1 ≤ 15 for A, P and J, and w1 ≤ 14 for JC and JM. Tables 11 and 12 show that the best timing is achieved by the wNAF-based interleaving method with w1 = 15 and w2 = 4 for each n.... In PAGE 11: ... Table 12: run-times of the wNAF-based interleaving method in ms... ..."
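For context, the width-w non-adjacent form (wNAF) recoding that the wNAF-based exponentiation above relies on can be sketched as follows. This is a generic textbook recoding in Python, not the paper's implementation; the function name is illustrative.

```python
def wnaf(k, w):
    """Compute the width-w non-adjacent form of the positive integer k.

    Returns digits least-significant first; each nonzero digit is odd
    and lies strictly between -2**(w-1) and 2**(w-1), and any two
    nonzero digits are separated by at least w-1 zeros, which is what
    keeps the number of ADDs low during evaluation.
    """
    digits = []
    while k > 0:
        if k & 1:                     # k odd: emit a nonzero digit
            d = k % (1 << w)          # k mod 2^w
            if d >= 1 << (w - 1):     # map into the signed digit range
                d -= 1 << w
            digits.append(d)
            k -= d                    # now k is divisible by 2^w... at least by 2
        else:
            digits.append(0)
        k >>= 1                       # one DOUBLE per emitted digit
    return digits
```

Summing `d * 2**i` over the returned digits recovers k, so the recoding can be checked directly against the input.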

### Table 2: complexities of point multiplication methods of Section 2.3

"... In PAGE 10: ... Tables 11 and 12 show that the best timing is achieved by the wNAF-based interleaving method with w1 = 15 and w2 = 4 for each n. This again confirms the theory (see Table 2). The big difference between w1 and w2 can be explained as follows: in the methods B-4 and B-5 only the point doublings are done simultaneously for all of the exponents, whereas the point additions are executed for each exponent separately (see [12]).... ..."
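The shared-doubling structure described here can be illustrated with a minimal sketch of interleaved multi-exponentiation (Straus/Shamir style). Integers stand in for curve points, so the group operations reduce to `+` and `*2`; the names are hypothetical and this is not the paper's B-4/B-5 code.

```python
def interleaved_multiexp(scalars, points):
    """Compute sum(k_i * P_i) with a single shared doubling chain.

    Integers stand in for curve points: ADD is '+', DOUBLE is '* 2'.
    The doublings are shared across all exponents, while each exponent
    contributes its own additions, which is why per-exponent window
    parameters can be tuned independently.
    """
    nbits = max(k.bit_length() for k in scalars)
    acc = 0
    for bit in range(nbits - 1, -1, -1):
        acc *= 2                          # one shared DOUBLE per bit
        for k, p in zip(scalars, points):
            if (k >> bit) & 1:
                acc += p                  # per-exponent ADD
    return acc
```

For example, `interleaved_multiexp([13, 7], [5, 11])` performs only one doubling chain yet yields 13·5 + 7·11.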

### Table 1. Sections of the Reconfigurable Multiple Operation Array

2005

"... In PAGE 4: ... Additionally, section 9, as part of the multiplier array, is used to add the last partial products, and section 10 receives the W(i) addend for MAC processing. Table 1 summarizes the terms received by I1 in each of the main sections through the 8-to-1 multiplexer. From the table it is evident that we process numbers in two's-complement representation with sign extension, and that signed-magnitude numbers are processed like positive numbers, making the sign bit zero and updating the result with the XOR of the multiplicand and multiplier signs.... ..."
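The sign rule described (multiply the magnitudes as positive numbers, then fix the result's sign with the XOR of the operand signs) can be sketched behaviorally; this is an illustration of the arithmetic convention, not the array hardware.

```python
def signed_magnitude_multiply(sign_a, mag_a, sign_b, mag_b):
    """Multiply two signed-magnitude numbers as the text describes.

    The magnitudes are treated as positive numbers, and the result's
    sign bit is simply the XOR of the operand sign bits
    (0 = positive, 1 = negative).
    """
    return sign_a ^ sign_b, mag_a * mag_b
```

So (-3) × (+4) gives sign 1 ^ 0 = 1 and magnitude 12, i.e. -12, without the multiplier array ever seeing a negative operand.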

Cited by 3

### Table 3: Analysis of Variance for the Multiple Linear Regression for Section 1

"... In PAGE 3: ...30 Total 13 674.82 Table 3: Partition of Sum of Squares of Regression. Source DF SS: Training 1 93.19, Average GPA 1 407.... In PAGE 4: ... The second unusual data point was for team 7: this design team did not receive the training, and performed poorly despite its high average team GPA. A similar analysis is done for the first half of the data (for section 1), as summarized in Table 3. In this case, only 19.... ..."
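The sum-of-squares partition behind a table like this can be reproduced for a simple (one-predictor) linear regression with a short sketch; the function name and sample data below are illustrative, not the study's.

```python
def ss_partition(x, y):
    """Partition the total sum of squares for the least-squares fit
    y ≈ b0 + b1*x into regression and error components, so that
    SS_total = SS_regression + SS_error (the ANOVA identity).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    fitted = [b0 + b1 * xi for xi in x]
    ss_total = sum((yi - my) ** 2 for yi in y)
    ss_reg = sum((fi - my) ** 2 for fi in fitted)
    ss_err = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    return ss_total, ss_reg, ss_err
```

The identity holds only for the least-squares fit itself, which is why the "Total" row in such a table equals the sum of the component rows.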

### Table 1. Potential loop-invariants for symmetric matrix multiplication for the specific partitioning in Section 3.

"... In PAGE 6: ... Alternative algorithms. Notice that by applying Steps 3–8 to each of the potential loop-invariants in Table 1, one will either conclude that the potential loop-invariant does not result in an algorithm, or one will discover an alternative algorithm. We leave it as an exercise for the reader to derive... ..."

### Table 1. Detection results of 5-fold cross-validation. Method 1: Train one single model without parsing. Method 2: Train multiple models for the parsed sections.

### Table 5.5: Application kernels and their characteristics

"... 5.2.1 Matrix Multiplication. This section evaluates matrix multiplication, which was described in Section 2.1.1. The natural unit of parallelism in matrix multiplication is the inner product. Therefore, the SF program creates n^2 filaments, which each compute one inner product and terminate. The Filaments program uses the iterative filaments kernel; the sequential code block is set to NULL, so the filaments are executed only once. The sequential and baseline programs are very similar. The sequential program consists of three nested for loops. The SM baseline program creates one process per processor, which is coarse-grain. Each process computes inner products for a contiguous block of the result matrix in three nested for loops. The DM baseline program is an explicit message-passing program, which distributes the input data... ..."
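The n^2 inner-product decomposition can be mimicked with a minimal sketch, assuming one lightweight task per result element stands in for a filament; the names here are illustrative and this is not the Filaments package itself.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_inner_products(a, b):
    """Multiply matrices by spawning one task per inner product,
    mirroring the n^2-filament decomposition: each task computes a
    single element of the result and then terminates.
    """
    n, m, p = len(a), len(b), len(b[0])
    cols = list(zip(*b))  # b's columns, so each task is a plain dot product

    def inner(i, j):
        # One "filament": the dot product of row i of a with column j of b.
        return sum(a[i][k] * cols[j][k] for k in range(m))

    with ThreadPoolExecutor() as pool:
        futures = [[pool.submit(inner, i, j) for j in range(p)]
                   for i in range(n)]
        return [[f.result() for f in row] for row in futures]
```

A coarse-grain variant, like the SM baseline described above, would instead give each worker a contiguous block of rows rather than a single element.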

1996

Cited by 1