### Table 6. Inline vs. method call in SVM decision function

### Table 3. Comparison of EER and minDCF for different systems on the 2003 NIST SRE 1sp limited data evaluation

2006

"... In PAGE 23: ... Since all systems use T-norm, no further normalization of scores is required. Figure 5 and Table 3 show the results of fusion. In the table, minDCF stands for minimum decision cost function, where the cost function is given by (27).... In PAGE 24: ... Fig. 5. NIST 2003 1sp limited data fusion results. Feature extraction has been tuned for a GMM; further research into optimizing features for the SVM approach should be explored. Another point to make about Figure 5 and Table 3 is the relative performance of the GMM and SVM. The GMM system uses a background data set, features (MFCCs), and T-norm, which have been extensively optimized for performance.... ..."
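The snippet above refers to the minimum decision cost function (minDCF) without reproducing its equation (27). As a rough illustration only, the sketch below computes minDCF in the standard NIST SRE form; the cost parameters (C_miss = 10, C_fa = 1, P_target = 0.01) are the usual NIST SRE 2003 settings and are an assumption here, not taken from the cited paper.

```python
# Hedged sketch of minDCF: sweep candidate thresholds over the observed
# scores and return the minimum detection cost. The cost parameters are
# assumed (standard NIST SRE 2003 values), not taken from equation (27).

def min_dcf(target_scores, nontarget_scores,
            c_miss=10.0, c_fa=1.0, p_target=0.01):
    """Minimum of DCF(t) = C_miss*P_miss(t)*P_tgt + C_fa*P_fa(t)*(1-P_tgt)."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best = float("inf")
    for t in thresholds:
        # A trial is accepted when its score is >= the threshold.
        p_miss = sum(s < t for s in target_scores) / len(target_scores)
        p_fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)
        best = min(best, dcf)
    return best

# Toy example with well-separated target and non-target scores.
targets = [2.1, 1.8, 2.5, 1.2]
nontargets = [-1.0, -0.5, 0.1, -2.0]
print(min_dcf(targets, nontargets))  # perfectly separable -> 0.0
```

Since the scores above are perfectly separable, some threshold yields zero misses and zero false alarms, so the minimum cost is 0.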

Cited by 7

### Table 1. Chosen decision variables, their limiting and initial values and initial value of the objective function.

2005

"... In PAGE 6: ... decision variables. Other improvements have also been made for 9 and 11 decision variables with similar results (see Ref. [10]). Table 1 shows the chosen decision variables, their limiting and initial values, and the initial value of the objective function. A case number is assigned to each set of initial values of the decision variables.... ..."

### Table 1. Commonly used SVM kernel functions and their parameters

2004

"... In PAGE 4: ... The kernel functions can sometimes be categorized as local kernels (Gaussian, KMOD) and global kernels (linear, polynomial, sigmoidal), where local kernels attempt to measure the proximity of data samples and are based on a distance function rather than the dot product used by global kernels. Table 1 lists the kernel expressions and corresponding parameters. Note that <x, y> represents the dot product, where x and y denote two arbitrary feature vectors.... In PAGE 4: ... In addition, each of the kernel functions has a varying number of free parameters which can be selected by the teacher. As can be seen from Table 1, the performance of an SVM using linear, Gaussian, or polynomial kernels is dependent upon one, two, and four parameters respectively. All the kernels share one common parameter C, the constant of constraint violation, which penalizes a data sample occurring on the wrong side of the decision boundary.... ..."
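The kernels named in the snippet above can be sketched in a few lines. The parameterizations below (gamma, coef0, degree for the polynomial kernel; sigma for the Gaussian) follow one common convention and are assumptions, not necessarily the exact forms tabulated in the cited paper; note how the Gaussian kernel depends on a distance while the global kernels depend on the dot product.

```python
import math

# Hedged sketch of commonly cited SVM kernels. Parameter names and exact
# forms are one standard convention, assumed rather than taken from Table 1.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def linear_kernel(x, y):
    # Global kernel, dot-product based; one parameter total (the shared C).
    return dot(x, y)

def polynomial_kernel(x, y, gamma=1.0, coef0=1.0, degree=3):
    # Three kernel-specific parameters; with the shared constraint-violation
    # constant C this matches the "four parameters" count in the snippet.
    return (gamma * dot(x, y) + coef0) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    # Local kernel: based on the distance ||x - y||, not the dot product.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

x, y = [1.0, 0.0], [0.0, 1.0]
print(linear_kernel(x, y))    # orthogonal vectors -> 0.0
print(gaussian_kernel(x, x))  # zero distance -> 1.0
```

The local/global distinction in the snippet is visible directly in the code: `gaussian_kernel` peaks at 1.0 when its arguments coincide and decays with distance, whereas the dot-product kernels grow with vector alignment and magnitude.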

Cited by 1

### Table 1. Video genre classification accuracy with Decision Trees, CZ-Nearest Neighbours and SVM classifiers.

"... In PAGE 5: ... Figure 4. Energy histograms in subband 6. The classifier maps an input space into a high-dimensional feature space through some mapping function and then constructs the optimal separating hyperplane in the high-dimensional feature space [3]. A summary of our experimental results is provided in Table 1. It tabulates results for each clip size and each classifier in terms of the number of clips correctly and erroneously classified, and the resulting classification accuracy, which is measured as the ratio of the number of audio clips correctly classified to the total number of clips.... ..."

### Table 4: Limited decision structure

### TABLE III. COMPARISON WITH SVM AND DECISION TREE

### TABLE VI NETWORK DELAY AS A FUNCTION OF LOAD

### Table 1. SAN limitations and constraints.

2002

"... In PAGE 5: ... For example, in Linux the attribute ((section ( GLOBAL DATA ))) has similar functionality. Finally, CableS does not attempt to deal with the restrictions on the amount of memory that can be registered and pinned, because these issues are better dealt with at the NIC level (Table 1). However, this work is beyond the scope of this paper, which focuses on SVM-library-level issues.... ..."

Cited by 6