### Table 1: A thousand simulations were carried out for each combination of functions and error distributions. Our sample size was n = 50, with t_i = i/50. We estimated y at i/500, i = 1, …, 500, using a cubic smoothing spline, with the smoothing parameter chosen by generalized cross-validation. Our standard errors (SEs) have been rounded to the nearest one-thousandth.

2000

Cited by 4
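The caption above describes smoothing with the parameter chosen by generalized cross-validation (GCV). As an illustration of how GCV picks the amount of smoothing, here is a minimal sketch using a discrete second-difference (Whittaker) smoother in place of a cubic spline basis; the roughness penalty and the GCV criterion n·RSS/(n − tr(H))² are the same idea. The function name and the test signal are illustrative, not from the cited paper.

```python
import numpy as np

def whittaker_gcv(y, lambdas):
    """Second-difference penalized (Whittaker) smoother with the
    penalty weight lambda chosen by generalized cross-validation.
    A simplified stand-in for a GCV cubic smoothing spline."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
    P = D.T @ D                                # roughness penalty matrix
    best = None
    for lam in lambdas:
        H = np.linalg.inv(np.eye(n) + lam * P) # hat matrix: y_hat = H y
        y_hat = H @ y
        dof = np.trace(H)                      # effective degrees of freedom
        gcv = n * np.sum((y - y_hat) ** 2) / (n - dof) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, y_hat)
    return best

# Illustrative setup mirroring the caption: n = 50, t_i = i/50, noisy signal.
rng = np.random.default_rng(0)
t = np.arange(1, 51) / 50
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, 50)
gcv, lam, y_hat = whittaker_gcv(y, np.logspace(-2, 4, 25))
```

GCV avoids refitting with left-out points: the trace of the hat matrix serves as the model's effective degrees of freedom, so one pass per candidate lambda suffices.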

### Table 2: A thousand simulations were carried out for each combination of functions and error distributions. Our sample size was n = 100, with t_i = i/100. We estimated y at i/500, i = 1, …, 500, using a cubic smoothing spline, with the smoothing parameter chosen by generalized cross-validation. Our standard errors (SEs) have been rounded to the nearest one-thousandth.

2000

Cited by 4

### Table 2: As Table 1, but for spline-smoothed recalibrated predictions.

1990

"... In PAGE 16: ... To distinguish it from the earlier polygonal G*, we shall denote the spline-smoothed recalibrating function by G**. The recalibrated predictions are then F̃_i**(t) = G_i**[F̃_i(t)] (10). Table 2 shows the u-plot and y-plot Kolmogorov distances for the same data sets as those used in Table 1. It can be seen that the entries in the two tables are very similar. ... ..."

Cited by 18
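The snippet above scores recalibrated predictions by the Kolmogorov distance of the u-plot: the maximum gap between the empirical CDF of the probability-integral-transform values u_i and the uniform CDF on [0, 1]. A minimal sketch of that computation (generic, not the cited paper's code):

```python
import numpy as np

def kolmogorov_distance(u):
    """Kolmogorov distance of a u-plot: sup-norm gap between the
    empirical CDF of the u_i and the uniform CDF on [0, 1].
    Small values indicate well-calibrated predictions."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each jump
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each jump
    return max(np.max(np.abs(ecdf_hi - u)), np.max(np.abs(u - ecdf_lo)))

# Perfectly calibrated u_i sit close to the uniform CDF, so the
# distance is small; u_i piled near 1 (optimistic predictions) give
# a distance near 1.
d_good = kolmogorov_distance((np.arange(1, 101) - 0.5) / 100)
d_bad = kolmogorov_distance(np.full(10, 0.9))
```

Because the ECDF is a step function, the supremum is attained at a jump, so checking both sides of each jump suffices.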

### Table 2. Titanium Heat Data: Spline Smoothing ~t0

"... In PAGE 19: ... Titanium Heat Data: Spline Smoothing t̃⁰

|            | t̃⁰           | RCSP-Ka-ED   | RCSP-GP-OD   | RSP-Ka-ED    |
|------------|--------------|--------------|--------------|--------------|
| t5         | 675.0        | 7.975133E+02 | 7.822991E+02 | 5.959958E+02 |
| t6         | 755.0        | 8.110142E+02 | 7.947857E+02 | 6.109336E+02 |
| t8         | 875.0        | 8.751572E+02 | 8.755310E+02 | 8.767428E+02 |
| t9         | 915.0        | 8.810366E+02 | 8.804978E+02 | 8.816339E+02 |
| t11        | 1015.0       | 9.625000E+02 | 9.625000E+02 | 9.625000E+02 |
| ‖F‖        | 1.027722E+00 | 3.469246E-01 | 3.460394E-01 | 3.544604E-01 |
| steps      |              | 7            | 13           | 9            |
| time (ms)  |              | 149          | 280          | 143          |
| \|FᵀJs\|   |              | 2.491557E-03 | 3.592591E-11 | 5.363720E-10 |
| ‖JᵀF‖      |              | 2.797501E-03 | 2.988107E-03 | 1.749176E-03 |
| Ret. Code  |              | 4            | 3            | 4            |

Table 2 shows the results of spline smoothing with free knots using different methods. The name RSP-Ka-ED (reduced smoothing problem, Kaufman model, exact derivatives) denotes a method from [SS95] for unconstrained spline smoothing with free knots. ... ..."

### Table 1. Bias for Three Smoothing Strategies (Whole Brain) GCV-Spline SPM-HRF No Smoothing

2003

"... In PAGE 18: ...s the SPM-HRF was a computational constraint (Friston et al., 2000). The results from the comparison of GCV-spline smoothing with the SPM-HRF and no smoothing of the simulated data show that optimal spline smoothing of each time series is, on average, significantly less biased than smoothing all time series with an identical SPM-HRF kernel or ignoring residual autocorrelations. The mean bias reported in Table 1 for the SPM-HRF is deceptive in the context of fMRI studies, since the majority of voxels with negative bias are located in regions other than grey matter. These negative-bias voxels shift the mean bias closer to zero. ... ..."

Cited by 1

### Table 1: Errors and convergence rates with and without C1-spline smoothing for BVP (6).

"... In PAGE 5: ... Consider the two-point boundary-value problem −u''(x) + π²u(x) = 2π² sin(πx), x ∈ (0, 1), u(0) = u(1) = 0, (6) with solution u(x) = sin(πx). As computational grids X_k we take 2^(k+3) + 1 uniformly spaced points on [0, 1], as indicated in Table 1. Even though we are dealing with a one-dimensional problem, we use the radial basis function φ_{5,3} as defined in Eqn. ... In PAGE 6: ... We do this in order to keep the bandwidth of the system matrices constant (note that the mesh size is also halved in each iteration). The rightmost column of Table 1 shows the percentage of nonzero entries in the system matrix at each level. Columns 2 and 3 of Table 1 show how the multilevel collocation algorithm performs without the recommended smoothing. Clearly, it ceases to converge after the first 4 steps. ... In PAGE 6: ... In Fasshauer & Jerome [5] the smoothing speeds t_k were defined in terms of three parameters α, β, and γ that can be chosen by the user (with some dependency on the smoothness of the problem). For the results of Table 1 we have used α = 10, β = 1.1, and γ = 1.2. The reader can see at least some benefits of the smoothing in this example, since the errors (as well as the convergence rates) are better for the first few steps of smoothing. ... In PAGE 7: ... Again, we use the RBF φ_{5,3}. The arrangement of the information in Table 2 is the same as in Table 1. However, the smoothing operation is now performed by convolving with the Gauss–Weierstrass kernel φ_t(x) = (t²/(4π)) e^(−t²‖x‖²/4), x ∈ ℝ². The parameters determining the smoothing speeds are α = 1.2, β = 1.5, and γ = 1.9. ... ..."
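Assuming the garbled problem (6) above reads −u''(x) + π²u(x) = 2π² sin(πx) on (0, 1) with u(0) = u(1) = 0 (the only form consistent with the stated solution u(x) = sin(πx), since −(sin πx)'' = π² sin πx), a plain second-order finite-difference solve — not the paper's multilevel RBF collocation — recovers that solution:

```python
import numpy as np

# Finite-difference check of the BVP  -u''(x) + pi^2 u(x) = 2 pi^2 sin(pi x),
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
n = 257                                   # number of interior grid points
h = 1.0 / (n + 1)                         # uniform mesh width
x = np.linspace(h, 1 - h, n)              # interior nodes

# Tridiagonal system: (-u_{i-1} + 2u_i - u_{i+1})/h^2 + pi^2 u_i = f_i.
main = np.full(n, 2.0 / h**2 + np.pi**2)  # diagonal of discretized -u'' + pi^2 u
off = np.full(n - 1, -1.0 / h**2)         # off-diagonals
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

u = np.linalg.solve(A, 2 * np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

The maximum error scales like h², so halving the mesh (as the multilevel scheme in the snippet does at each level) should quarter it.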

### TABLE I PRIMARY FUNCTIONS AND OPERATORS FOR SMOOTHING SPLINE ESTIMATION AND THE MODELING OF FRACTAL PROCESSES

2007

Cited by 1