### Table 2.1: Notation conventions for multi-label classification problems (columns: Symbol, Description)

2002

### Table 12 Experimental results of each multi-label learning algorithm on the web page data sets in terms of coverage.

2007

Cited by 2

### Table 15 Relative performance between each multi-label learning algorithm on the web page data sets.

2007

"... In PAGE 17: ...4401 on the set of all comparing algorithms which are shown in Table 15. As shown in Table 15, Ml-knn achieves comparable results in terms of all the evaluation criteria, where on all these metrics no algorithm has outperformed Ml-knn. On the other hand, although BoosTexter performs quite well in terms of one-error, coverage, ranking loss and average precision, it performs almost the worst among all the comparing algorithms in terms of hamming loss.... ..."
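The snippet above compares algorithms on hamming loss, one-error, and related metrics. As a rough sketch of what two of those metrics measure (a common textbook formulation, not the paper's own code; the toy data is invented for illustration):

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # fraction of instance-label pairs on which prediction and truth disagree
    return float(np.mean(Y_true != Y_pred))

def one_error(Y_true, scores):
    # fraction of instances whose top-ranked label is not a relevant one
    top = np.argmax(scores, axis=1)
    return float(np.mean(Y_true[np.arange(len(Y_true)), top] == 0))

# toy data: 2 instances, 3 labels
Y_true = np.array([[1, 0, 1], [0, 1, 0]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0]])
scores = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.3]])
print(hamming_loss(Y_true, Y_pred))  # one mismatch out of six pairs
print(one_error(Y_true, scores))
```

This shows why an algorithm can rank labels well (low one-error) yet still predict label sets poorly (high hamming loss), as reported for BoosTexter.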

Cited by 2

### TABLE II PERFORMANCE ON THE YEAST DATA FOR OTHER MULTI-LABEL LEARNING ALGORITHMS.

### Table 10 Experimental results of each multi-label learning algorithm on the web page data sets in terms of hamming loss.

2007

Cited by 2

### Table 14 Experimental results of each multi-label learning algorithm on the web page data sets in terms of average precision.

2007

Cited by 2

### Table 2: Training/testing time of using binary and pairwise approaches for multi-label problems. Note the support vector ratio (symbol not recoverable from the extraction).

in Abstract

"... In PAGE 4: ...layer. As they both point to 23, in the third layer we consider only one node. Under the same assumption on the support vector ratio, the total testing time lies between two bounds depending on the number of nodes used [the complexity formulas are not recoverable from the extraction]. Table 2 summarizes the training/testing complexity of both approaches. The main conclusion is that the pairwise approach has complexity related to the average number of labels per sample.... ..."
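The snippet contrasts the binary (one-vs-rest) and pairwise approaches. Since the original complexity formulas were lost in extraction, here is only the uncontroversial part, the classifier counts each approach trains for k labels (a standard fact about these decompositions, not the paper's analysis):

```python
def classifier_counts(k):
    # one-vs-rest (binary) trains one classifier per label;
    # pairwise trains one per unordered pair of labels
    binary = k
    pairwise = k * (k - 1) // 2
    return binary, pairwise

print(classifier_counts(10))  # (10, 45)
```

The pairwise approach trains many more classifiers, but each on a smaller subset of the data, which is why its overall cost can still be competitive.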

### Table 2.1. Multi-label shortest path method

### Tables 1 and 2 show the results on multi-labeled documents and uni-labeled documents, respectively. Table 3 shows the Macro F1-measure of the multi-labeled and uni-labeled documents. It is apparent that both systems have very competitive performance on the task of multi-class multi-label document classification. From the returned Macro-F1 value, the system based on the sequential document representation performs better than the system based on LSI. On the other hand, LSI works better where each document is based on a single topic. The experiments show that LSI does work better on the uni-labeled documents for some k values (25 or 125). However, it works worse on the multi-labeled documents whatever the k value is. The encoded sequential document representation can capture the characteristic sequences for documents and categories. Good performance is achieved by utilizing the sequence information for classification. The results show that it works better for the relatively larger categories, such as Money+Interest, Grain+Wheat and Earn. We conclude that the reason behind this is that this data representation relies on the machine-learning algorithm to capture the characteristic word or word co-occurrences for catego...

in 'Evaluation of Two Systems on Multi-class Multi-label Document Classification', paper presented to

2005

Cited by 1

### Table 4: Parameters used and a comparison of three multi-label approaches. Note that both pair and binary approaches happen to select the same parameters. The winning approach is bold-faced. For RCV1-V2 and OHSUMED, mean and standard deviation (std) of five training/testing subsets are presented. Moreover, we try five random node arrangements for the pairwise approach, so mean and std of 25 values are presented. BO: Boostexter. (Columns: Parameters, Exact Match Ratio)

in Abstract

"... In PAGE 6: ...hem to binary values (i.e., whether a term appears or not), as that is its main feature type for inputting texts. The resulting three performance measures are in Table 4. The proposed pairwise approach is generally better than the binary method.... In PAGE 7: ... In the training data, 72 and 73 always appear together, so the pairwise approach successfully captures such information. Results of the pairwise approach in Table 4 also show that different arrangements of nodes in the DAG do not affect the performance much. The two SVM-based methods outperform Boostexter.... ..."
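Table 4 as described reports exact match ratio, one of the strictest multi-label measures: an instance counts as correct only if its whole predicted label set matches the truth. A minimal sketch of this common definition (not code from the paper; the arrays are invented):

```python
import numpy as np

def exact_match_ratio(Y_true, Y_pred):
    # fraction of instances whose entire predicted label set is exactly correct
    return float(np.mean(np.all(Y_true == Y_pred, axis=1)))

Y_true = np.array([[1, 0], [0, 1]])
Y_pred = np.array([[1, 0], [1, 1]])
print(exact_match_ratio(Y_true, Y_pred))  # only the first instance matches
```

Its strictness explains why exploiting label co-occurrence (as the pairwise approach does for labels that always appear together) helps on this measure in particular.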