### Table 3: Occlusion results when considering sunglasses. Note that in this case, NMF alone is better when using a high-dimensional feature space and no lighting conditions are considered. When lighting conditions are considered, the Bayesian approach obtains the best recognition rates.

### Table 3: F1 and number of Support Vectors for top two Medline queries

"... 5 Conclusions: The paper has presented a novel kernel for text analysis, and tested it on a categorization task, which relies on evaluating an inner product in a very high-dimensional feature space. For a given sequence length k (k = 5 was used in the experiments reported), the features are indexed by all strings of length k. Direct computation of ..."

2002

Cited by 199
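The excerpt above describes a kernel whose features are indexed by all strings of length k. A minimal sketch of that idea, computing the inner product sparsely over the k-mers that actually occur (the function names `kmer_features` and `string_kernel` are illustrative, not from the cited paper):

```python
from collections import Counter

def kmer_features(s, k=5):
    """Map a string to its bag of length-k substrings; the feature
    space is indexed by all strings of length k."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def string_kernel(s, t, k=5):
    """Inner product in the k-mer feature space, evaluated without
    materializing the full high-dimensional vectors."""
    fs, ft = kmer_features(s, k), kmer_features(t, k)
    return sum(count * ft[gram] for gram, count in fs.items())
```

Only the k-mers shared by both strings contribute, so the cost is linear in the string lengths even though the implicit feature space is exponential in k.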

### Table 2. Various methods and algorithms mentioned in section 3.1 and their ability to confront effectively the issues mentioned in the same section (Incremental Updates, Performance in Text Classification Tasks, High Dimensionality, Low Computational Cost, Concept Drift, Dynamic Feature Space).

"... In PAGE 7: ...7 complexity for training the filtering models, updating them and providing recommen- dations. In Table2 , we summarize the basic characteristics of the aforementioned systems in terms of the issues discussed in this section. Table 2.... ..."

### Table 1. Types of kernel functions

2000

"... In PAGE 3: ...) , ( i x x K G114 G114 . Table1 shows three typical kernel functions [8]. An optimal hyperplane is constructed for separating the data in the high-dimensional feature space.... ..."

Cited by 23
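The three kernel functions typically listed in such a table are the linear, polynomial, and RBF (Gaussian) kernels. A sketch of each as K(x, x_i); the degree, offset, and gamma defaults below are illustrative choices, not values taken from the cited papers:

```python
import numpy as np

def linear_kernel(x, xi):
    """K(x, x_i) = <x, x_i>"""
    return float(np.dot(x, xi))

def polynomial_kernel(x, xi, degree=3, c=1.0):
    """K(x, x_i) = (<x, x_i> + c)^degree"""
    return float((np.dot(x, xi) + c) ** degree)

def rbf_kernel(x, xi, gamma=0.5):
    """K(x, x_i) = exp(-gamma * ||x - x_i||^2)"""
    return float(np.exp(-gamma * np.sum((x - xi) ** 2)))
```

Each evaluates an inner product in some (possibly infinite-dimensional, for the RBF case) feature space without computing the mapping explicitly.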


### Table 1. Types of kernel functions

2000

"... In PAGE 3: ... If the two classes are non-linearly separable, the input vectors should be nonlinearly mapped to a high- dimensional feature space by an inner-product kernel function ) , ( i x x K G114 G114 . Table1 shows three typical kernel functions [10]. An optimal hyperplane is constructed for separating the data in the high-dimensional feature space.... ..."

Cited by 31

### Table 1. Video genre classification accuracy with Decision Trees, CZ-Nearest Neighbours and SVM clas- sifiers.

"... In PAGE 5: ...igure 4. Energy histograms in subband 6. classifier maps an input space into a high dimensional feature space through some mapping function and then constructs the optimal separating hyperplane in the high dimensional feature space [3]. A summary of our exper- imental results is provided in Table1 . It tabulates results for each clip size and each classifier in terms of the num- ber of clips correctly and erroneously classified, and the resulting classification accuracy, which is measured as the ratio of number of audio clips correctly classified to the total number of clips.... ..."

### Table 2: A sample data set illustrating clusters embedded in subspaces of a high-dimensional space.

2003

"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points of 6 dimensional(given in Table2 ). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}.... In PAGE 3: ...here sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)T. Since there are possibly multiple states(or values) for a vari- able, a symbol table of a data set is usually not unique. For example, for the data set in Table2 , Table 3 is one of its symbol tables. BC BS A A A A B B B B C C C C D D D D BD BT Table 3: One of the symbol tables of the data set in Table 2.... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique according to that symbol table. For example, for the data set in Table2 , let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}, if we use the symbol table presented in Table 3, then the corre- sponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: nj CG r=1 fjr(C) = |C|, j = 1, 2, .... ..."

Cited by 4
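The frequency table described in the excerpt counts, per dimension, how often each symbol occurs within a cluster, which makes the identity Σ_{r=1}^{n_j} f_jr(C) = |C| immediate. A sketch on a hypothetical categorical data set (the actual values of the paper's Table 2 are not reproduced here; only the 5-point, 6-dimension shape and the cluster C = {x1, x2, x3}, P = {1, 2, 3, 4} are taken from the excerpt):

```python
from collections import Counter

# Hypothetical 5-point, 6-dimensional categorical data set.
X = [
    ["A", "B", "C", "D", "E", "F"],
    ["A", "B", "C", "D", "G", "H"],
    ["A", "B", "C", "D", "I", "J"],
    ["K", "L", "M", "N", "E", "H"],
    ["O", "P", "Q", "R", "G", "F"],
]
C = [0, 1, 2]      # cluster C = {x1, x2, x3} (0-indexed rows)
P = [0, 1, 2, 3]   # associated dimensions P = {1, 2, 3, 4}

def frequency_table(X, cluster, dims):
    """f_jr: for each dimension j, how often each state r occurs
    within the cluster. Summing f_jr over r recovers |C| for every j."""
    return {j: Counter(X[i][j] for i in cluster) for j in dims}

freq = frequency_table(X, C, P)
```

Because every point in C contributes exactly one symbol per dimension, each per-dimension counter sums to |C| = 3, matching Equation (6)'s identity.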

### Table 1. The Isomap algorithm takes as input the distances dX(i,j) between all pairs i,j from N data points in the high-dimensional input space X, measured either in the standard Euclidean metric (as in Fig. 1A) or in some domain-specific metric (as in Fig. 1B). The algorithm outputs coordinate vectors yi in a d-dimensional Euclidean space Y that (according to Eq. 1) best represent the intrinsic geometry of the data. The only free parameter (ε or K) appears in Step 1.

"... In PAGE 2: ... These approxima- tions are computed efficiently by finding shortest paths in a graph with edges connect- ing neighboring data points. The complete isometric feature mapping, or Isomap, algorithm has three steps, which are detailed in Table1 . The first step deter- mines which points are neighbors on the manifold M, based on the distances dX(i,j) between pairs of points i,j in the input space X.... ..."
