### Table 1: Average document perplexities (PP) for the French database. SOM0 is a SOM trained with 0-neighborhood (equivalent to an on-line adaptive version of classical K-means clustering) and SOMb a larger SOM (600 units). For clustered systems (SOM) the smoothed model is the weighted average of the 10 best-matching clusters (as explained in Section 2). For the non-clustered methods (RM, SVD) the weighted average of the 20 best-matching actual document vectors is used for smoothing. When no clustering was used, independent test data could be simulated by ignoring the current document, giving perplexities 1.94 and 2.46 for RM and SVD, respectively.

1999

"... In PAGE 8: ... The first database has French-speaking news, and in the decoding used here the WER was high and varied a lot between different sections. The average perplexity results in Table 1 indicate that the more smoothing is applied, the higher the perplexity on the training data. (Smaller neighborhood and larger number of SOM units imply less smoothing.) ... In PAGE 9: ... Evaluation results for the TREC test set. AP is the average precision, RP the R-precision. "THISL default" is a baseline index without LSA and "perfect" is an index based on the correct transcriptions. As in Table 1, the simulated test data perplexity gave 2.7 and 1.... ..."
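The smoothing scheme described in the caption, a weighted average of a document's best-matching cluster centroids, can be sketched as follows. This is a minimal illustration, assuming cosine similarity as the matching score and similarity-proportional weights; the function name and these choices are not from the paper.

```python
import numpy as np

def smooth_with_clusters(doc_vec, centroids, k=10):
    """Smooth a document vector as a weighted average of its k
    best-matching cluster centroids (weights = cosine similarity).
    A sketch of the caption's scheme, not the paper's exact model."""
    # cosine similarity between the document and every centroid
    norms = np.linalg.norm(centroids, axis=1) * np.linalg.norm(doc_vec)
    sims = centroids @ doc_vec / np.maximum(norms, 1e-12)
    top = np.argsort(sims)[::-1][:k]      # k best-matching clusters
    w = sims[top] / sims[top].sum()       # normalized weights
    return w @ centroids[top]             # weighted average of centroids
```

For the non-clustered RM and SVD variants, the same routine would be applied with actual document vectors in place of `centroids` and `k=20`.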

Cited by 14

### Table 2: Experimental results of Semantic LAC compared to LAC and K-Means (columns: LAC, Sem LAC, K-Means)

"... In PAGE 5: ... We ran LAC 6 times on all the datasets for 1/h = 1, ..., 6. Table 2 lists the average error rate, standard deviation, and the minimum error rate obtained by running Semantic LAC on all the datasets, along with the corresponding results of the LAC algorithm, and K-Means as a baseline comparison. Figures 1 and 2 illustrate the error rates of Semantic LAC and LAC as a function of the h parameter values for the Classic3 (3%) and NewsGroup/Medical-Electronic datasets, respectively. ... In PAGE 7: ... better error rates in less than six iterations. In particular, the results reported in Table 2 were obtained with the maximum threshold for the number of iterations set to five. This demonstrates the potential of using local kernels that embed semantic relations as a similarity measure. ... ..."

### Table 1: Summary of Classic3 experimental setup and specifications

"... In PAGE 3: ... Three datasets, namely MEDLINE, CISI, and CRANFIELD, were used for experiments. Table 1 details the experimental setup used. The value k of NCV matrix columns, or k clusters of Spherical K-means, was initially set to 5% of the total number of documents in the dataset. ... ..."

### Table 3: Unsupervised k-means clustering of the Tzanetakis data set. Rows correspond to model categories and columns correspond to clusters.

"... In PAGE 8: ... The clustering results are shown in Table 3. An examination of Table 3 shows some interesting patterns. The instances belonging to classical, country, disco, metal and rock were each assigned in their entirety to a single cluster, and almost all reggae classes were assigned to the same cluster. ... ..."
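A category-by-cluster table like the one the excerpt examines is just a contingency count of true categories against unsupervised cluster assignments. A minimal sketch, with illustrative genre labels rather than the actual Tzanetakis data:

```python
import numpy as np

def contingency(labels_true, labels_pred):
    """Build a category-by-cluster count table: rows are model
    categories, columns are clusters (as in Table 3's layout)."""
    cats = sorted(set(labels_true))
    clus = sorted(set(labels_pred))
    table = np.zeros((len(cats), len(clus)), dtype=int)
    for t, p in zip(labels_true, labels_pred):
        table[cats.index(t), clus.index(p)] += 1
    return cats, clus, table

# toy example: two "classical" instances land in cluster 0,
# "rock" and "reggae" instances land in cluster 1
cats, clus, tab = contingency(
    ["classical", "classical", "rock", "rock", "reggae"],
    [0, 0, 1, 1, 1])
```

A category assigned "in its entirety to a single cluster" shows up as a row with a single nonzero entry.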

### TABLE 15. CVT Reaction Bottleneck of R1 in SES and ESP Approximations at 298 K

1999

Cited by 1

### Table 1: The classical axioms

1996

"... In PAGE 4: ... a-free) expressions. Examples of equations in the theory S are those in Table 1, called the classical axioms by Conway [6, page 25], and the laws xy = yx and (x + y)* = x*y*. Unlike the classical axioms, the laws above only hold under the assumption that the alphabet is a singleton. The following identity is an easy consequence of the classical axioms: 0* = 1. (1) An example of an equation that is contained in E, but not in S, is a + x = a. ... In PAGE 6: ... Table 1. Instantiating these equations, we derive that EF proves all of the equalities C14:n(a).... In PAGE 8: ... In that case, we shall simply write Mp[[P]] for the denotation of P in the algebra Mp. It is not hard to see that the equations C1–13 in Table 1 are sound in the algebra Mp. We now proceed to show that the algebra Mp meets the requirements P1 and P2 that we set out to achieve. ... ..."
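The singleton-alphabet laws quoted in the excerpt lend themselves to a quick sanity check: over a one-letter alphabet a word is determined by its length, so a language can be represented as a set of lengths, with concatenation becoming addition. The length bound N and this encoding are choices made for the sketch, not part of the cited paper.

```python
import random

N = 12  # truncate languages at words of length N

def cat(x, y):
    """Concatenation of one-letter languages = sums of word lengths."""
    return {i + j for i in x for j in y if i + j <= N}

def star(x):
    """Kleene star, computed as a fixpoint under the length bound."""
    s = {0}  # the empty word
    while True:
        t = s | cat(s, x)
        if t == s:
            return s
        s = t

assert star(set()) == {0}  # the identity 0* = 1

random.seed(0)
for _ in range(100):
    x = set(random.sample(range(1, N // 2), 3))
    y = set(random.sample(range(1, N // 2), 3))
    assert cat(x, y) == cat(y, x)                # xy = yx
    assert star(x | y) == cat(star(x), star(y))  # (x + y)* = x*y*
```

Both laws hold here precisely because length addition commutes; with two or more letters, counterexamples such as ab ≠ ba appear immediately, matching the excerpt's caveat.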

Cited by 8

### Table 1: Classical approaches

"... In PAGE 3: ... Of course, the NoP can become a crucial factor, since a large number results in high test costs. In Table 1 the name of the benchmark is given in the first column, followed by the number of inputs and outputs in columns two and three, respectively. The number of literals, the number of paths and the PDFC are given in columns lits, NoP and PDFC, respectively. ... ..."

### Table 1: Classical algorithm

### Table 1: Comparison of p-QR, p-Kmeans, and K-means for two-way clustering (columns: Newsgroups, p-QR, p-Kmeans, K-means)

2001

"... In PAGE 7: ... Example 1. In this example, we look at binary clustering. We choose 50 random document vectors each from two newsgroups. We tested 100 runs for each pair of newsgroups, and list the means and standard deviations in Table 1. The two clustering algorithms p-QR and p-Kmeans are comparable to each other, and both are better, and sometimes substantially better, than K-means. ... ..."
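The experimental protocol in the excerpt (repeated two-way clustering runs, reporting mean and standard deviation of the error rate) can be sketched with a plain K-means baseline. Note the synthetic Gaussian blobs below stand in for newsgroup document vectors, and p-QR / p-Kmeans themselves are not reproduced; only the K-means column's methodology is illustrated.

```python
import numpy as np

def kmeans2(X, iters=20, seed=0):
    """Plain two-cluster Lloyd's K-means (the baseline method)."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), 2, replace=False)]  # random initial centroids
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute means
        a = np.linalg.norm(X[:, None] - c[None], axis=2).argmin(1)
        for j in (0, 1):
            if np.any(a == j):
                c[j] = X[a == j].mean(0)
    return a

def error_rate(true, pred):
    # with two clusters, score the better of the two label matchings
    acc = np.mean(true == pred)
    return min(acc, 1.0 - acc)

# repeated runs on synthetic stand-ins for a newsgroup pair
errs = []
for s in range(10):
    rng = np.random.default_rng(s)
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
    true = np.repeat([0, 1], 50)
    errs.append(error_rate(true, kmeans2(X, seed=s)))
mean_err, std_err = float(np.mean(errs)), float(np.std(errs))
```

The paper's 100-runs-per-pair setup corresponds to widening the loop over seeds and over newsgroup pairs.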

Cited by 63
