### Table 4: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness levels, for the case where the function is Linear. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

### Table 5: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness levels, for the case where the function is Gaussian. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

### Table 7: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness levels, for the case where the function is Mixture. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

### Table 8: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness levels, for the case where the function is Product. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.
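The MISE ratio reported in these tables compares each method's mean integrated squared error against a baseline, averaged over simulation settings. A minimal Monte Carlo sketch of how such a ratio can be computed, using a hypothetical sparse linear truth and a constant-predictor baseline (the data design and estimators here are illustrative, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def mise(fit, truth, X_eval):
    # Mean integrated squared error, approximated by averaging
    # squared errors over a grid of evaluation points.
    return np.mean((fit(X_eval) - truth(X_eval)) ** 2)

# Sparse linear truth in p = 5 dimensions: only the first coordinate matters.
p = 5
def truth(X):
    return 2.0 * X[:, 0]

X = rng.uniform(-1, 1, size=(200, p))
y = truth(X) + rng.normal(0, 0.5, size=200)

# Method: ordinary least squares on all p variables.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
def ols_fit(Xe):
    return Xe @ beta

# Baseline: the constant (sample-mean) predictor.
ybar = y.mean()
def const_fit(Xe):
    return np.full(len(Xe), ybar)

X_eval = rng.uniform(-1, 1, size=(5000, p))
ratio = mise(ols_fit, truth, X_eval) / mise(const_fit, truth, X_eval)
```

A ratio well below 1 means the method beats the baseline; averaging such ratios over noise and sample size levels yields table entries of the kind shown above.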

### Table 3 shows the EER obtained for different baseline sparse structures (SEM is structural EM). The first column zeroes the pairs with the minimum values of the corresponding criterion, and the second column zeroes the pairs with the maximum values; the second column serves mainly as a consistency check: if the min entry of criterion A is lower than the min entry of criterion B, then the max entry of criterion A should be higher than the max entry of criterion B. Table 3 shows improved results over the full-covariance case, but the results are not better than those of the diagonal-covariance case. For the structural EM, pruning step sizes of 50 and 100 were tested, and no difference was observed.
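The min/max pruning scheme described above can be sketched as zeroing a chosen fraction of a regression matrix's coefficients according to a per-entry criterion score. A hedged illustration (function names and the magnitude-based criterion are assumptions, not the paper's code):

```python
import numpy as np

def prune_by_criterion(B, crit, frac, keep="min"):
    """Zero a fraction of the off-diagonal regression coefficients in B.

    crit holds one score per coefficient; keep="min" zeroes the entries
    with the smallest scores (the first column in the table), while
    keep="max" zeroes the largest (the consistency-check column).
    """
    B = B.copy()
    # Only strictly lower-triangular entries are candidate regression
    # coefficients; the diagonal is left untouched.
    rows, cols = np.tril_indices_from(B, k=-1)
    scores = crit[rows, cols]
    n_zero = int(frac * len(scores))
    order = np.argsort(scores)
    idx = order[:n_zero] if keep == "min" else order[-n_zero:]
    B[rows[idx], cols[idx]] = 0.0
    return B

rng = np.random.default_rng(1)
B = np.tril(rng.normal(size=(6, 6)))
crit = np.abs(B)  # e.g. a magnitude-based criterion (illustrative)
B_sparse = prune_by_criterion(B, crit, frac=0.5, keep="min")
```

With `frac=0.5`, half of the 15 lower-triangular coefficients are zeroed; swapping `keep="max"` reproduces the consistency-check variant.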

"... In PAGE 7: ... Table3 : EER for different sparse structures, left number is for 30 second test utterances and right number for 3-second. 5 Conclusions In this work the problem of estimating sparse regression matrices of mixtures of Gaussians was addressed.... ..."

### Table 4: EER for different sparse structures, left number is for 30-second test utterances and right number for 3-second.

"... In PAGE 4: ... All structure-finding experiments are with the same num- ber of components and percent of regression coefficients pruned. Table4... In PAGE 5: ...Table 4: EER for different sparse structures, left number is for 30 second test utterances and right number for 3- second. From Table4 we can see improved results from the full-covariance case but results are not better than the diagonal-covariance case. All criteria appear to perform similarly.... In PAGE 5: ... All criteria appear to perform similarly. Table4 also shows that zeroing the regression coefficients with the maximum of each criterion func- tion does not lead to systems with much different perfor- mance. Also from Table 3 we can see that randomly ze- roing regression coefficients performs approximately the same as taking the minimum or maximum.... ..."

### Table 3: Regression Model

"... In PAGE 13: ... A quadratic model is used within a data- fitting problem, with constraints being set on the eigen-decomposition of the quadratic term. Table3 shows the results for one particular sets of constraints.... In PAGE 13: ... As well as the improved run time obtained with MAD we also note that the error in the position of the minimum obtained was 2 orders of magnitude smaller than that obtained using finite-differencing. Interestingly, the bottom row of Table3 shows the mode of AD for the constraints shifting automatically from full, to sparse to compressed as the problem size increases. 5 Conclusions and Future Developments In this paper we have presented the MADJacInternal function which enables auto- mated, performance driven selection of a Jacobian evaluation algorithm via the forward mode fmad class of the MAD package.... ..."

Cited by 2

### Table 1. Classification of algorithms solving the serial spatial auto-regression model

- Exact, ML based: Applying Direct Sparse Matrix Algorithms [25]; Matrix Exponential Specification [26]
- Exact, Eigenvalue based: 1-D Surface Partitioning [16]; Graph Theory Approach [32]
- Approximate: Taylor Series Approximation [23]; Chebyshev Polynomial Approximation Method [30]

in Comparing exact and approximate spatial auto-regression model solutions for spatial data analysis

2004

"... In PAGE 2: ...A number of researchers who have been attracted to SAR because of its high com- putational complexities have proposed efficient methods of solving the model. These solutions, summarized in Table1 , can be classified into exact and approximate solu-... ..."

Cited by 3
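The exact/approximate split in Table 1 centers on evaluating the log-determinant ln|I − ρW| in the SAR likelihood. A hedged sketch contrasting an exact eigenvalue-based evaluation with a truncated Taylor-series approximation, in the spirit of the classification above (the weight matrix and function names are illustrative, not code from the cited works):

```python
import numpy as np

def logdet_exact(rho, W):
    # Eigenvalue-based exact evaluation:
    # ln|I - rho*W| = sum_i ln(1 - rho * lambda_i).
    lam = np.linalg.eigvals(W)
    return np.sum(np.log(1.0 - rho * lam)).real

def logdet_taylor(rho, W, terms=30):
    # Truncated Taylor series:
    # ln|I - rho*W| = -sum_{k>=1} rho^k * tr(W^k) / k,
    # which converges when |rho| times the spectral radius of W is < 1.
    total, Wk = 0.0, np.eye(len(W))
    for k in range(1, terms + 1):
        Wk = Wk @ W
        total -= rho ** k * np.trace(Wk) / k
    return total

# Small row-normalized spatial weight matrix (illustrative).
rng = np.random.default_rng(2)
A = rng.random((8, 8))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
W = A / A.sum(axis=1, keepdims=True)  # row-stochastic, spectral radius 1

rho = 0.4
exact = logdet_exact(rho, W)
approx = logdet_taylor(rho, W)
```

With |ρ| < 1 and a row-stochastic W, the truncated series converges geometrically, which is why the approximate methods in Table 1 scale better than repeated exact determinant evaluations on large sparse W.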

### Table 3: Averages of the MISE ratios for each regression method, broken out by dimension level (indicated by p), for the case where model sparseness sets all variables to be spurious (i.e., the Constant function was used regardless of function level). Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

1997

"... In PAGE 23: ...Table3 shows this analysis for the case where model sparseness sets all variables to be spurious (i.e.... ..."

Cited by 1
