### Table 1: Comparison of average joint angle prediction error for different models. All kPCA-based models use 6 output dimensions. Testing is done on 100 video frames per sequence; the inputs are artificially generated silhouettes not present in the training set. 3D joint angle ground truth is used for evaluation. KDE-RR is a KDE model with ridge regression (RR) for the feature space mapping, KDE-RVM uses an RVM, and stand-alone BME uses a Bayesian mixture of experts with no dimensionality reduction. kBME is our proposed model. kPCA-based methods use kernel regressors to compute pre-images.

2005

Cited by 5
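The caption describes a pipeline in which kernel PCA reduces the pose representation to 6 dimensions and a kernel regressor maps latent coordinates back to pre-images in the original space. A minimal sketch of that pattern, using scikit-learn's `KernelPCA` (whose `fit_inverse_transform` option learns exactly such a kernel-ridge pre-image map) on hypothetical random data standing in for joint-angle vectors:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))  # hypothetical stand-in for joint-angle vectors

# 6 latent dimensions, as in the caption; gamma and alpha are illustrative choices.
kpca = KernelPCA(n_components=6, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True, alpha=1e-3)
Z = kpca.fit_transform(X)          # 6-D latent coordinates
X_rec = kpca.inverse_transform(Z)  # pre-images via learned kernel ridge regression
print(Z.shape, X_rec.shape)
```

This is only a sketch of the kPCA-plus-pre-image step, not the full kBME model, which additionally learns a Bayesian mixture of experts in the latent space.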


### Table 3: Support vectors used by single kernel and multiple kernel SV regression. MSE_S denotes the relative mean squared error maintained; SV_S and SV_M are the proportion of support vectors needed by the single kernel and multiple kernel SV regression, respectively. The kernel types and their parameters are listed in Table 2.

"... In PAGE 13: ... In addition to improvement in prediction accuracy, we found using multiple kernels reduced the number of support vectors in the SV regression predictions. Table 3 shows the number of support vectors needed by each data set to maintain the prediction accuracy achieved by the respective single kernel regressor. As Table 3 suggests, the number of support vectors was reduced by 56% on average. This SV reduction indicates that using more kernels can describe the data better than using a single kernel. ... ..."
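The comparison described in the snippet can be reproduced in miniature: fit SVR once with a single RBF kernel and once with a sum of RBF kernels at several bandwidths (a sum of valid kernels is itself a valid kernel), then count support vectors. This is a sketch on hypothetical 1-D data; all parameter values are illustrative, and whether the multi-kernel fit actually needs fewer support vectors depends on the data and kernels chosen.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

# Single RBF kernel, passed as a precomputed Gram matrix.
K_single = rbf_kernel(X, X, gamma=1.0)
single = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K_single, y)

# Sum of RBF kernels at several bandwidths (still a positive-definite kernel).
K_multi = sum(rbf_kernel(X, X, gamma=g) for g in (0.1, 1.0, 10.0))
multi = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K_multi, y)

print(len(single.support_), len(multi.support_))
```

The 56% average reduction reported in the source is a property of their data sets and tuned kernels, not something this toy setup is guaranteed to reproduce.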

### Table 1. Dependent Variables and Regressors

1998

"... In PAGE 10: ... The analysis presented here is based on traders that could be located in the two rounds. The main characteristics of the surveyed traders are summarized in Table 1. Total sales and gross margin are used as alternative measures of output. ... In PAGE 13: ... Section 3. Returns to Social Network Capital. The estimation of equation (1) by ordinary least squares is presented in the first column of Tables 3 and 4 using the dependent variables presented in Table 1. The functional form used for regression analysis is basically a Cobb-Douglas production function and is estimated in log-log form ... ..."
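Estimating a Cobb-Douglas production function in log-log form, as the snippet describes, means taking logs of output and inputs so the elasticities become linear coefficients recoverable by OLS. A minimal sketch on simulated data (the input names and true elasticities 0.6 and 0.3 are hypothetical, not taken from the cited study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
labor = rng.lognormal(mean=1.0, size=n)    # hypothetical input
capital = rng.lognormal(mean=2.0, size=n)  # hypothetical input
# Cobb-Douglas: output = A * labor^0.6 * capital^0.3 * multiplicative noise
output = 2.0 * labor**0.6 * capital**0.3 * rng.lognormal(sigma=0.1, size=n)

# Log-log form: log(output) = log(A) + 0.6*log(labor) + 0.3*log(capital) + e
Xmat = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
beta, *_ = np.linalg.lstsq(Xmat, np.log(output), rcond=None)
print(beta)  # intercept ~= log(2), slopes ~= 0.6 and 0.3
```

Because the model is multiplicative, taking logs turns it into a linear regression whose slope coefficients are directly interpretable as output elasticities.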

### Table 2: Models with an endogenous regressor.

### TABLE 9 Projected Values of Regressors

### Table 2. Regressor Descriptive Statistics

"... In PAGE 19: ... Felix Landsiedl. B. Descriptive Statistics. Table 2 gives summary descriptive statistics for the dataset resulting from the collection, ordering and filtering procedure described above. After the filtering and data cleaning process described above, we end up with 7,766 intraday realized transaction bid-ask and 31,714 equally-weighted, daily, quoted bid-ask spread observations. ... In PAGE 21: ... As expected, the DHC are significantly positive in all regression specifications and for both samples. We can infer from the correlations depicted in Table 2 that log(DHC+1) is more than 50% larger for the realized spread, as the realized DHC incorporates the total contract volume of each trade whereas the DHC for the quoted spread assumes a transaction volume of 1. Therefore, the estimated parameter for the DHC impact is larger for the QSPR estimation than for the RSPR estimation. ... In PAGE 22: ... In the realized spread estimation it has a positive sign but it lacks statistical significance. Looking at the descriptive statistics of Table 2 we can see that the average gamma of all quoted spread observations is 0.14 with a standard deviation of 0. ... ..."

### Table 3. Regressor Correlation Matrix

"... In PAGE 19: ... This points out that the QSPR sample contains options that are weakly covered by the market makers. Table 3 shows the cross-correlations of the independent and the explanatory variables used in regression equations (14) and (15). The first line reports the correlation for the transaction sample containing 7,766 observations and the second line refers to the quoted spread option sample with 31,726 observations. ... ..."
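A regressor correlation matrix of the kind reported in these tables is a single call in NumPy. A sketch on hypothetical regressors, two of which are constructed to be correlated at 0.8:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
# Hypothetical regressors: x1 and x2 correlated by construction, x3 independent.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)  # population corr(x1, x2) = 0.8
x3 = rng.normal(size=n)

# rowvar=False: each column is one regressor, each row one observation.
R = np.corrcoef(np.column_stack([x1, x2, x3]), rowvar=False)
print(np.round(R, 2))
```

Inspecting such a matrix before estimation is the standard way to spot the kind of near-collinearity among regressors that several of the snippets above discuss.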

### Table 2 Correlation Matrix of Basic Regressors

"... In PAGE 21: ... The latter variables, however, are available only for a smaller sample. The correlation matrix of our basic set of regressors in Table 2 shows three striking features. First, as mentioned above, all three income distribution indicators are very highly correlated with each other, with correlation coefficients in all cases exceeding ... In PAGE 22: ... For both the full and LDC samples in columns 8 and 9, the main consequence is that the estimated coefficient on growth loses all significance, a finding similar to that reported by Carroll and Weil (1994) when excluding from their sample the East-Asian 'tigers'. In addition, in the LDC sample (column 9) the estimated coefficient on the income distribution indicator becomes larger in absolute value and closer to statistical significance. The correlation between real per capita GNP and its square, not presented in Table 2 ... ..."