### Table 2.2: The scalar magnetic potentials for axially invariant, cylindrically symmetric geometry in both polar and cartesian coordinates.

2001

Cited by 3

### Table 2: Convergence of source magnetic energies computed by several methods (values in Joule)

"... In PAGE 23: ... For the nest nite volume sub- division, the relative errors are approximately 1 1:5 10?4 in the most remote positions with respect to the source region, approximately 1 10?3 in the geometry core (the small cylinder) and only 2 3 10?2 near and inside the source region. Global results are shown in Table2 , which reports estimates of the source magnetic energies W 0 in the whole domain, the current-carrying conductor and the central cylindri- cal core, depending on the method (a), (b) or (c) used for the computation of Hs. The reported values for method (b), in this Table and in the successive ones, refer to a step size h = 0:02 in the composite trapezoidal integration.... ..."

### Table 1. Material parameters in different regions. ⊥ refers to the xy-plane.

"... In PAGE 4: ... The geometry is described in detail in [5]. The material properties are listed in Table1 . Both the electric conductivity and the magnetic permeability are anisotropic.... ..."


### Table 10 shows GTC performance results for a simulation comprising 4 million particles and 1,187,392 grid points over 200 time steps. The geometry is a torus described by the configuration of the magnetic field. On a single processor, the Power3 achieves 10% of peak, while the Power4 performance represents only 5% of its peak. The SX-6 single-processor experiment runs at 701 Mflops/s, or only 9% of its theoretical peak. This poor SX-6 performance is unexpected, considering the relatively high AVL and VOR values. We believe this is because the scalar units need to compute the indices for the scatter/gather of the underlying unstructured grid. However, in terms of raw performance, the SX-6 still outperforms the Power3/4 by factors of 4.6 and 2.5, respectively.

in Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations

2004

"... In PAGE 16: ... Table10 : Performance of GTC on a 4-million particle simulation. Parallel results demonstrate that scaling on the SX-6 is not nearly as good as on the Power3/4.... ..."

Cited by 2

### Table 9 shows GTC performance results. The simulation in this study comprises 4 million particles and 301,472 grid points. The geometry is a torus described by the configuration of the magnetic field. On a single processor the Power3 sustains 153 Mflops/s (10% of peak), while the 277 Mflops/s achieved on the Power4 represents only 5% of its peak performance. The SX6 single-processor experiment runs at 701 Mflops/s, or only 9% of its theoretical peak. This poor performance is unexpected, considering the relatively high AVL (180) and VOR (97%). We believe this is because the scalar units need to compute the indices for the scatter/gather of the underlying unstructured grid. However, the SX6 still outperforms the Power3/4 by factors of 4.6 and 2.5, respectively.
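The percent-of-peak and speedup figures quoted above are simple ratios; a minimal sketch using the numbers from this entry (helper name is hypothetical):

```python
def percent_of_peak(sustained_mflops, peak_mflops):
    """Fraction of theoretical peak achieved, as a percentage."""
    return 100.0 * sustained_mflops / peak_mflops

# From the text: the Power3 sustains 153 Mflops/s at 10% of peak,
# implying a theoretical peak of about 1530 Mflops/s.
power3_peak = 153 / 0.10

# SX6 sustained rate and its speedup over the Power3 (~4.6x, as quoted).
sx6 = 701
speedup_vs_power3 = sx6 / 153
```

The quoted factors (4.6 over Power3, 2.5 over Power4) follow directly from the sustained Mflops/s ratios, independent of each machine's peak rating.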

in Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

2004

"... In PAGE 10: ... Table9 : Performance of GTC on a 4-million particle simulation. 10 Molecular Dynamics: Mindy Mindy is a simplified serial molecular dynamics (MD) C++ code, derived from the parallel MD program NAMD [3].... ..."

Cited by 2


### Table 12 shows the results of single-processor GTC runs. Only the serial version has been vectorized at this time; however, recent results demonstrate that GTC scales well on massively parallel architectures. The simulation in this study comprises 4 million particles and 301,472 grid points. The geometry is a torus described by the configuration of the magnetic field. The Power3 sustains 174 Mflops/s (12% of peak), while the 304 Mflops/s achieved on the Power4 represents only 6% of its peak performance. The SX6 experiment runs at 716 Mflops/s, or only 9% of its theoretical peak. This poor performance is unexpected, considering the relatively high AVL (180) and VOR (97%). We believe this is because the scalar units need to compute the indices for the scatter/gather of the underlying unstructured grid. However, the SX6 still outperforms the Power3/4 by factors of 2.7 and 5.3, respectively.

2005

"... In PAGE 14: ... Table12 : Serial performance of GTC on a 4-million-particle simulation 11 Molecular Dynamics: Mindy Mindy is a simplified serial molecular dynamics (MD) C++ code, derived from the parallel MD program NAMD [3]. The energetics, time integration, and file formats are identical to those used by NAMD.... ..."

Cited by 1

### Table 1. The time to solve the problem on each level and relative error in the computed magnetic energies using LL and piecewise linear polynomial elements. The reference values are from two dimensional computations done at ABB, see [4].

"... In PAGE 15: ...igure 2. The geometry of the axisymmetric problem. The dimensions are given in meters. Table1 gives the errors in energy, r, and the solution times, ts, on each level of the computation. The error is de ned as r = kWref ? Whk kWrefk ; (5.... ..."