### Table 1: Performance in the toy problem. The results for kernel shaping were obtained using 200 gradient descent steps with step size 0.2.

2000

"... In PAGE 5: ... The shape matrix L has maximal rank l = 5 in this experiment. Our results for local linear regression using both a fixed spherical kernel and kernel shaping are summarized in Table 1. Performance is measured in terms of the mean R² value of the 20 models, and standard deviations... In PAGE 6: ... The results for kernel shaping were obtained using 200 gradient descent steps with step size 0.2. The results in Table 1 indicate that the optimal performance on the test set is obtained using the parameter value 2% both for kernel shaping (R² = 0.909) and for the spherical kernel (R² = 0.293). Given the large difference between the R² values, we conclude that kernel shaping clearly outperforms the spherical kernel on this data set.... ..."

Cited by 7
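The excerpt above describes local linear regression with a shaped (Mahalanobis-type) kernel whose shape matrix L is tuned by gradient descent, with performance measured by mean R². A minimal sketch of the prediction and scoring side, assuming a Gaussian kernel of the form exp(−½ (x−q)ᵀLLᵀ(x−q)); the paper's exact kernel and gradient update are not shown in the excerpt:

```python
import numpy as np

def shaped_kernel_weights(X, q, L):
    """Gaussian weights using the shaped distance d^2 = (x-q)^T L L^T (x-q)."""
    D = X - q
    M = L @ L.T
    d2 = np.einsum('ij,jk,ik->i', D, M, D)
    return np.exp(-0.5 * d2)

def local_linear_predict(X, y, q, L):
    """Weighted least-squares fit of a local linear model around query point q."""
    w = shaped_kernel_weights(X, q, L)
    A = np.hstack([X - q, np.ones((len(X), 1))])  # linear terms + intercept
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * y, rcond=None)
    return coef[-1]  # at x = q the linear terms vanish, so the intercept is the prediction

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2, the score reported in the tables."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A spherical kernel is recovered by taking L proportional to the identity; kernel shaping instead adapts L (here left as an input) by gradient descent on the fit error.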

### Table 1. Performance in the first toy example. The results for kernel shaping were obtained using 200 gradient descent steps. (Columns: Kernel, Training, Test)

2000

"... In PAGE 10: ... The shape matrix L has maximal rank l = 5 in this experiment. Our results for local linear regression using both a fixed spherical kernel and kernel shaping are summarized in Table 1. Performance is measured in terms of the mean R² value of the 20 models, and standard deviations are reported in parentheses.... In PAGE 10: ... Performance is measured in terms of the mean R² value of the 20 models, and standard deviations are reported in parentheses. The results in Table 1 indicate that the optimal performance on the test set is... In PAGE 12: ... Note that we make no use of the fact that a rank of two would really be sufficient to represent the Mexican Hat function. The results of our experiments are summarized in Table 1 and in Figure 5, which are organized as in Section 6.1.... ..."

Cited by 7

### Table 2. Performance in the second Mexican Hat example. The results for kernel shaping were obtained using 200 gradient descent steps. (Columns: Kernel, Training, Test)

2000

Cited by 7

### Table 3: Results on the Shape dataset.

"... In PAGE 5: ... 5.1 Shape Dataset Table 3 shows the results on the Shape data set using different learning rates. The first line shows that for the Shape data set, the basic GP approach without using the gradient descent algorithm ( is 0.... ..."

### Table 3. Results using the Abalone database after 200 gradient descent steps.

2000

"... In PAGE 14: ... To prevent possible artifacts resulting from the order of the data records, we randomly draw 2784 observations as a training set and use the remaining 1393 observations as a test set. Our results are summarized in Table 3 using various settings for the rank l, the equivalent fraction parameter, and the gradient descent step size. The optimal choice is 20% both for kernel shaping (R² = 0.582) and for the spherical kernel (R² = 0.572).... ..."

Cited by 7
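The excerpt above draws a random 2784/1393 train/test split of the 4177 Abalone records to avoid artifacts from the original record order. A minimal way to do this (the seed and function name are illustrative):

```python
import numpy as np

def random_split(n, n_train, seed=0):
    """Randomly partition indices 0..n-1 into a training set of size n_train
    and a test set of the remaining records, breaking any record ordering."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    return perm[:n_train], perm[n_train:]

# Abalone sizes from the excerpt: 4177 records -> 2784 train, 1393 test.
train_idx, test_idx = random_split(4177, 2784, seed=1)
```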

### Table 2: Results using the Abalone database after 200 gradient descent steps.

2000

"... In PAGE 6: ... To prevent possible artifacts resulting from the order of the data records, we randomly draw 2784 observations as a training set and use the remaining 1393 observations as a test set. Our results are summarized in Table 2 using various settings for the rank l, the equivalent fraction parameter, and the gradient descent step size. The optimal choice is 20% both for kernel shaping (R² = 0.582) and for the spherical kernel (R² = 0.572).... ..."

Cited by 7

### Table 1. Estimates of the cone shape with the different techniques.

2000

"... In PAGE 5: ... To ensure the unity of the normal vector [n_x, n_y, n_z]^T we introduce the penalty function: C_unit(p) = (p^T U p − 1)², where U is an appropriate matrix. The optimization function (17) is thus set up as follows: p^T H p + λ₁ C_unit(p) + λ₂ C_circ(p). The results obtained with the different techniques are grouped in Table 1, except for the AED technique, since a cone surface does not have a constant gradient value. The AD technique gives an elliptic cone, whereas the SR and GF ensure a faithful shape estimate, with relatively better accuracy for the GF.... In PAGE 6: ... Figure 9 illustrates the difference, where the bias in the shape estimates is expressed in terms of the (minor axis/major axis) ratio. The same aspect is noticed for the cones if we compare the cone estimates for object 1 (Table 1... ..."

Cited by 10
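The excerpt above minimizes a quadratic objective pᵀHp augmented with penalty terms such as C_unit(p) = (pᵀUp − 1)², which softly enforces pᵀUp = 1. A generic sketch of minimizing one such penalized objective by gradient descent; the matrices, penalty weight `lam`, step size, and iteration count here are illustrative, not the paper's values:

```python
import numpy as np

def fit_with_unit_penalty(H, U, lam=10.0, steps=2000, lr=1e-2, seed=0):
    """Minimize p^T H p + lam * (p^T U p - 1)^2 by gradient descent.
    The quadratic penalty pushes p toward the constraint p^T U p = 1
    (a penalty method leaves a small residual violation of order 1/lam)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=H.shape[0])
    for _ in range(steps):
        c = p @ U @ p - 1.0                      # constraint violation
        grad = 2.0 * H @ p + 4.0 * lam * c * (U @ p)
        p -= lr * grad
    return p
```

With U = I this drives p toward a minimal-eigenvalue direction of H of (approximately) unit norm, which is the typical shape of such normal-vector estimation problems.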


### Table 6.4 Comparison of Modal Frequencies Between the Experimental Results and Modal Analysis Results After Fine Tuning.

1999

Cited by 1

### Table 4. Abundance gradients in NGC 300. (Columns: Source, Abundance gradient)

"... In PAGE 6: ...reasing abundance with growing radius (Fig. 4). In Table 3 the calculated abundances for each region are given as well as the isophotal radii. The calculated abundance gradients are given in Table 4 together with results for oxygen from previous work. There is a fine agreement between the results.... In PAGE 6: ...43 ± 0.37 dex per isophotal radius, indicating that the uncertainty of the gradient might very well be greater than that quoted in Table 4. We do not, however, especially mistrust our observations of this region.... ..."
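An abundance gradient like those in Table 4 is the slope of a straight-line fit of abundance against radius, quoted together with its uncertainty. A minimal sketch, assuming an ordinary least-squares fit; the paper's exact fitting and error-estimation procedure is not shown in the excerpt:

```python
import numpy as np

def abundance_gradient(radius, abundance):
    """Least-squares slope (the 'abundance gradient') of abundance vs. radius,
    returned with the standard error of the slope."""
    A = np.vstack([radius, np.ones_like(radius)]).T   # columns: radius, constant
    (slope, intercept), *_ = np.linalg.lstsq(A, abundance, rcond=None)
    n = len(radius)
    resid = abundance - A @ np.array([slope, intercept])
    s2 = resid @ resid / (n - 2)                      # residual variance
    se = np.sqrt(s2 / np.sum((radius - radius.mean()) ** 2))
    return slope, se
```

A large standard error relative to the slope, as discussed in the excerpt, means the quoted gradient should be read with caution.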