### Table 1. Steps of parallel computation

1995

"... In PAGE 7: ... Finally the computation of the slave portion x_s corresponding to an eigenvector ũ of the slave problem can be done in parallel as well: (K_ss,j − p(ũ) M_ss,j)(x_s)_j = −(K_sm,j − p(ũ) M_sm,j) ũ, j = 1, ..., r. 4 Substructuring and parallel processes. To each substructure we attach one process named `S_j', and with the master information we associate one further process called `Ma'. These processes work in parallel as shown in Table 1. For the necessary communication, each `S_j' is connected to `Ma' directly or indirectly. ... In PAGE 7: ... A detailed description is contained in [14]. Table 1 shows how this parallel eigensolver, which consists of the processes `Ma' and `R1', ... In PAGE 11: ... 6 Numerical results. The parallel concept was tested on a distributed-memory PARSYTEC transputer system equipped with T800 INMOS transputers (25 MHz, 4 MB RAM) under the distributed operating system `Helios'. Since each processor has multiprocessing capability, we were able to execute more than one process from Table 1 on every processor, which turned out to be extremely important for good load balancing of the system. We do not discuss the mapping of the process topology to the processor network. ... In PAGE 13: ... Table 4). For the parallel solution of the matrix eigenvalue problem via condensation and improvement using the Rayleigh functional according to Table 1, we proposed in [16] the following procedure. ... ..."

Cited by 10
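The slave-portion computation quoted above is independent per substructure, so the r linear solves can run concurrently. A minimal sketch with NumPy and a thread pool (all names and the dense random blocks standing in for K_ss,j, M_ss,j, K_sm,j, M_sm,j are hypothetical illustrations, not the paper's transputer processes):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def slave_portion(Kss_j, Mss_j, Ksm_j, Msm_j, p_u, u):
    """Solve (K_ss,j - p(u) M_ss,j) (x_s)_j = -(K_sm,j - p(u) M_sm,j) u
    for one substructure j; each j is independent of all others."""
    return np.linalg.solve(Kss_j - p_u * Mss_j, -(Ksm_j - p_u * Msm_j) @ u)

# Illustrative data: r substructures, ns slave and nm master DOFs each.
rng = np.random.default_rng(0)
r, ns, nm = 4, 6, 3
blocks = []
for _ in range(r):
    A = rng.standard_normal((ns, ns))
    Kss = A @ A.T + ns * np.eye(ns)   # symmetric positive definite block
    Mss = np.eye(ns)
    Ksm = rng.standard_normal((ns, nm))
    Msm = rng.standard_normal((ns, nm))
    blocks.append((Kss, Mss, Ksm, Msm))

p_u = 0.5                              # value of the Rayleigh functional p(u)
u = rng.standard_normal(nm)            # master eigenvector approximation

# The solves are independent, so a thread pool runs them concurrently
# (NumPy releases the GIL inside linalg.solve).
with ThreadPoolExecutor() as pool:
    xs = list(pool.map(lambda b: slave_portion(*b, p_u, u), blocks))
```

In the paper each solve would instead live in its own process `S_j` on a transputer node; the thread pool here only illustrates the independence of the r systems.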

### Table 3: Highly Parallel Computing

1994

"... In PAGE 4: ... (If you choose to do Gaussian Elimination, partial pivoting must be used.) Compute and report a residual for the accuracy of solution as ||Ax − b|| / (||A|| ||x||). The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... In PAGE 59: ... The results obtained using Strassen's algorithm are as accurate as those from Gaussian Elimination. The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... ..."

Cited by 321
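The scaled residual quoted above, ||Ax − b|| / (||A|| ||x||), is simple to compute once a solution has been obtained. A sketch with NumPy (2-norms are used here for illustration; the benchmark's exact norm choice and any further scaling are not specified in the excerpt):

```python
import numpy as np

def solution_residual(A, x, b):
    """Scaled residual ||Ax - b|| / (||A|| * ||x||) for a computed
    solution x of A x = b (2-norms; the norm choice is illustrative)."""
    return np.linalg.norm(A @ x - b) / (np.linalg.norm(A, 2) * np.linalg.norm(x))

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)   # LU factorization with partial pivoting
res = solution_residual(A, x, b)
```

Because Gaussian elimination with partial pivoting is backward stable, `res` comes out near machine precision for a well-scaled system; a residual many orders of magnitude larger signals an inaccurate solve.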

### Table 3: Highly Parallel Computing

"... In PAGE 4: ... (If you choose to do Gaussian Elimination, partial pivoting must be used.) Compute and report a residual for the accuracy of solution as ||Ax − b|| / (||A|| ||x||). The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... In PAGE 36: ... ** The IBM GF11 is an experimental research computer and not a commercial product. The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... ..."

### Table 3: Highly Parallel Computing

"... In PAGE 4: ... (If you choose to do Gaussian Elimination, partial pivoting must be used.) Compute and report a residual for the accuracy of solution as ||Ax − b|| / (||A|| ||x||). The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... In PAGE 66: ... It is based on vector processors that are manufactured by NEC. The columns in Table 3 are defined as follows: Rmax, the performance in Gflop/s for the largest problem run on a machine; Nmax, the size of the largest problem run on a machine. ... ..."