### Table 5.3 Superlinear Convergence.

### Table 5.1 Superlinear and linear convergence constants.

2005

Cited by 2

### Table 5.2 Superlinear and linear convergence constants for the advection equation.

2005

Cited by 2

### Table 2c. Superlinear convergence of residuals for Hölder curves with r = 3.5, 2.01, N = 256.

"... In PAGE 11: ... (A numerical approximation to f can be used with similar results.) In Table 2a we take A = 1, B = 0.01, l = 2, = 0.5 (r = 3.5). The discrete error behaves like O(n^-2.5), as can be expected. In Table 2b, we take A = 5, B = 1, l = 1, = 0.01 (r = 2.01). This case is not covered by Theorems 2 and 3. Nevertheless, the discrete error behaves roughly like O(n^-1.01). ... In PAGE 13: ... 3.5 | 512 | 9.5 x 10^-10 | 6; 3.5 | 1024 | 2.1 x 10^-10 | 6 (Table 2a) ... 2.01 | 2048 | 5.9 x 10^-5 | 6; 2.01 | 4096 | 3.0 x 10^-5 | 6 (Table 2b) ..."

Cited by 1

### Table 3: Verifying superlinear convergence: (a) easy and (b) hard case. In Tables 3.a and 3.b we monitor the progressive decrease in the magnitude of ‖x_k‖

1995

Cited by 39
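The verification device used in this entry, monitoring whether successive step magnitudes shrink faster than geometrically, can be sketched as follows. This is a generic illustration with Newton's method on a made-up scalar equation, not code from the cited paper; the function and starting point are arbitrary choices.

```python
# Hedged sketch: detecting superlinear convergence by monitoring step sizes.
# The test equation x^3 - 2x - 5 = 0 is a classic Newton example, chosen
# here purely for illustration.

def newton_steps(f, df, x0, iters=8):
    """Run Newton's method and record the step magnitudes |x_{k+1} - x_k|."""
    x, steps = x0, []
    for _ in range(iters):
        dx = -f(x) / df(x)
        x += dx
        steps.append(abs(dx))
    return x, steps

root, steps = newton_steps(lambda x: x**3 - 2*x - 5,
                           lambda x: 3*x**2 - 2, x0=2.0)

# Ratios of consecutive step sizes: they shrink toward zero under
# superlinear (here quadratic) convergence.
for k in range(1, 4):
    print(steps[k] / steps[k - 1])
```

Under plain linear convergence the printed ratios would level off at a positive constant; their decay toward zero is the empirical signature of superlinear convergence that tables like the one above record.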

### Table 1: Newton method for minimization of the perimeter. In Table 2 we consider the same problem but with evolutive displacement directions. In that case we observe a superlinear rate of convergence.

"... In PAGE 24: ... The initial point will be given by a piecewise perturbation of a circle of radius 1 + ε. In Table 1 we present the results considering the domains with 128 nodes and using fixed displacement directions. We follow the evolution of the Euclidean norm of the gradient, the distance between the approximation and the exact solution (which is a circle), the distance between the approximation and the solution in the set of polygons with 128 nodes, the area, and the difference of perimeters. ..."

### Tables 2-4 indicate that a small number of iterations is required to compute an accurate solution. Moreover, the number of iterations changes only slightly with the problem size. Table 5 illustrates the performance of N-Weiszfeld on degenerate problems where the number of coinciding locations is greater than n. Asymptotically, superlinear convergence is observed for the results in Tables 2-4, while linear convergence is occasionally observed in Table 5, due to degeneracy.

1996

"... In PAGE 15: ... 2. Large-scale Random Multifacility Location Problems. We now report some computational results of the proposed N-Weiszfeld algorithm on random multifacility location (RMFL) problems (1). Each iteration-number entry in Tables 2-5 is an average of 10 random test problem instances. Both the best and the worst computed optimality accuracy ... In PAGE 16: ... opt (10^-14, 10^-7) (10^-5, 10^-14) (10^-7, 10^-13) (10^-8, 10^-13); deg (10^-3, 10^-3) (10^-4, 10^-2) (10^-4, 10^-3) (10^-4, 10^-3). TABLE 3: Random Multifacility Location Problems on a Plane with 120 Existing Facilities. w_ij <- 1000 w_ij for i = 1:2:nloc and j = 1:2:nloc, and ω_ij = 100|rand| + 1. In this experiment, we have fixed the number of new facilities and varied the number of existing facilities. The dimension of the matrix A for Table 2 ranges from 20-by-1090 to 20-by-6090. In Table 2, the worst accuracy obtained is 10^-8 and the best is 10^-15. RMFL 2. ..."

Cited by 6
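For context, the fixed-point iteration that the N-Weiszfeld method builds on can be sketched for the single-facility Fermat-Weber problem min_x Σ_i w_i ‖x − a_i‖. The data, weights, and starting point below are invented for illustration; the classic Weiszfeld iteration shown here converges only linearly in general, which is the behavior the Newton-type acceleration in the paper is designed to improve.

```python
# Hedged sketch: the textbook single-facility Weiszfeld iteration, not the
# paper's multifacility N-Weiszfeld algorithm.  All data are made up.
import math

def weiszfeld(points, weights, x0, iters=200, tol=1e-12):
    """Fixed-point iteration for the weighted geometric median in the plane."""
    x = list(x0)
    for _ in range(iters):
        num, den = [0.0, 0.0], 0.0
        for a, w in zip(points, weights):
            d = math.hypot(x[0] - a[0], x[1] - a[1])
            if d < tol:           # iterate landed on a data point: stop there
                return list(a)
            num[0] += w * a[0] / d
            num[1] += w * a[1] / d
            den += w / d
        x_new = [num[0] / den, num[1] / den]
        shift = math.hypot(x_new[0] - x[0], x_new[1] - x[1])
        x = x_new
        if shift < tol:
            break
    return x

# Unweighted geometric median of the corners of a square is its center.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
x_star = weiszfeld(pts, [1, 1, 1, 1], x0=(0.3, 1.7))
print(x_star)   # ≈ [1.0, 1.0]
```

The guard against a zero distance reflects the degeneracy the paper's Table 5 probes: when an iterate coincides with an existing facility, the plain Weiszfeld map is undefined and convergence can drop to linear.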

### Table 6.3 corresponds to the convergence history of Algorithm 2.1 with fixed h. Evidently, the algorithm converges superlinearly for fixed h. Combining this observation with the behavior with respect to decreasing h, we infer that the superlinear rate of convergence does not deteriorate as h decreases. Moreover, independently of the mesh size h, Algorithm 2.1 requires 5 iterations until successful termination. The latter behavior is known as strong mesh independence (see [1]), which numerically corroborates our theoretical results.

2003

### Table 5.5: Algorithm 5.11 applied to (5.5). x0^T = [1, 2], 0 = 1. The solution is found without problems, and the columns with f and ‖f'‖ show superlinear convergence, as defined in (2.6). Example 5.5. We have used Algorithm 5.11 on Rosenbrock's function from Example 4.3. We use the same starting point, x0 = [-1.2, 1]^T, and with 0 = 1, ε1 = 10^-10, ε2 = 10^-12 we found the solution after 29 iteration steps. The performance is illustrated below.

1999
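As a point of comparison for the last entry, a plain Newton iteration on Rosenbrock's function f(x, y) = 100(y − x²)² + (1 − x)², started from the same point [−1.2, 1], can be sketched as follows. This is not Algorithm 5.11 from the cited notes, just undamped Newton with an analytic gradient and Hessian; without step-size control the early iterates may overshoot, but the final phase is quadratically (hence superlinearly) convergent.

```python
# Hedged sketch: undamped Newton's method on Rosenbrock's function,
# f(x, y) = 100*(y - x^2)^2 + (1 - x)^2, minimizer at (1, 1).
# The tolerance and iteration cap are illustrative choices.

def rosenbrock_newton(x, y, iters=100, tol=1e-9):
    """Return the final iterate and the number of Newton steps taken."""
    for k in range(iters):
        # analytic gradient of f
        gx = -400 * x * (y - x * x) - 2 * (1 - x)
        gy = 200 * (y - x * x)
        if (gx * gx + gy * gy) ** 0.5 < tol:
            return (x, y), k
        # analytic Hessian entries
        hxx = 1200 * x * x - 400 * y + 2
        hxy = -400 * x
        hyy = 200.0
        # solve the 2x2 Newton system H d = -g by Cramer's rule
        det = hxx * hyy - hxy * hxy
        dx = (-gx * hyy + gy * hxy) / det
        dy = (-gy * hxx + gx * hxy) / det
        x, y = x + dx, y + dy
    return (x, y), iters

sol, k = rosenbrock_newton(-1.2, 1.0)
print(sol, k)   # sol ≈ (1.0, 1.0)
```

With no trust region or line search the iterates do not decrease f monotonically, which is why practical methods such as the trust-region and damped variants discussed in the entries above wrap Newton steps in a globalization strategy while preserving the local superlinear rate.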