## Efficient Multi-label Ranking for Multi-class Learning: Application to Object Recognition

Citations: 11 (1 self)

### Citations

980 | Visual Categorization with Bags of Keypoints
- Csurka, Bray, et al.
- 2004
Citation Context: ...should also be noted that there are a total of 10 classes in the VOC 2006 set while this number is 20 for VOC 2007. A bag-of-words model is used to represent image content. Following the standard approach [4], we obtained SIFT descriptors from each image in the dataset and then clustered these feature vectors into 5,000 clusters by an approximate K-means algorithm [18]. Evaluation metric: Area under the ...
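The bag-of-words representation this context describes (local descriptors quantized against a clustered vocabulary, then histogrammed) can be sketched as below. The random `desc` and `vocab` arrays are stand-ins for real SIFT descriptors and a trained 5,000-word K-means vocabulary, and `bow_histogram` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def bow_histogram(descriptors, centers):
    """Quantize local descriptors against a visual vocabulary and
    return an L1-normalized bag-of-words histogram."""
    # Squared Euclidean distance from every descriptor to every center.
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()   # normalize to a distribution

rng = np.random.default_rng(0)
desc = rng.normal(size=(200, 128))   # stand-in for SIFT descriptors (128-d)
vocab = rng.normal(size=(50, 128))   # stand-in for the K-means vocabulary
h = bow_histogram(desc, vocab)
```

In the paper's setting the vocabulary would have 5,000 entries and the centers would come from approximate K-means over all training descriptors; the quantization step itself is unchanged.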

932 | A comparison of methods for multiclass support vector machines.
- Hsu, Lin
- 2002
Citation Context: ...approaches divide a multi-label learning task into multiple independent binary labeling tasks. The division usually follows one-vs-all (OvA), one-vs-one or the general error correction code framework [6, 13, 11]. Most of these approaches suffer from imbalanced data distributions when constructing binary classifiers to distinguish individual classes from the remaining classes. This problem becomes more severe...
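The one-vs-all decomposition and the imbalance it induces can be made concrete with a small sketch; `ova_problems` is a hypothetical helper that rewrites one multi-class label vector into K binary problems:

```python
import numpy as np

def ova_problems(y, num_classes):
    """Rewrite a multi-class label vector into K binary (+1/-1) problems.
    With K classes, each problem's positive side holds roughly 1/K of the
    data -- the imbalance the text refers to."""
    return [np.where(y == k, 1, -1) for k in range(num_classes)]

y = np.array([0, 1, 2, 2, 1, 0, 2, 2])
problems = ova_problems(y, 3)
```

Each of the three resulting label vectors defines an independent binary classifier; as K grows, the positive class in each problem becomes a smaller and smaller minority.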

719 | Solving multiclass learning problems via error-correcting output codes. arXiv preprint cs/9501101
- Dietterich, Bakiri
- 1995
Citation Context: ...approaches divide a multi-label learning task into multiple independent binary labeling tasks. The division usually follows one-vs-all (OvA), one-vs-one or the general error correction code framework [6, 13, 11]. Most of these approaches suffer from imbalanced data distributions when constructing binary classifiers to distinguish individual classes from the remaining classes. This problem becomes more severe...

421 | Online passive–aggressive algorithms.
- Crammer, Dekel, et al.
- 2006
Citation Context: ...proposed in [9] for multi-label learning problems. Constraints derived from the multi-labeled instances were used in [9] to enforce that the ranking of relevant classes is higher than the irrelevant ones. [3] improves the computational efficiency of [9] by only considering the most violated constraints. Dekel et al. [5] and Shalev-Shwartz et al. [21] encode the ranking using a preference graph. In [5] a b...

277 | Working set selection using the second order information for training SVM.
- Fan, Chen, et al.
- 2005
Citation Context: ...are repeated several times, and AUC averaged over these runs is reported as the final result. Baseline methods: We compare the ranking ability of the proposed method to three baseline methods: (i) the LIBSVM [7] implementation of the OvA SVM classifier, which is shown to outperform multi-class SVM methods in [11]; (ii) SVMperf [14], which is designed to optimize Area Under the ROC Curve (AUC), which is used as the ev...
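The AUC metric used for evaluation here equals the Wilcoxon–Mann–Whitney statistic: the probability that a randomly chosen positive instance is scored above a randomly chosen negative one. A minimal sketch (the `auc` function is illustrative, not the paper's evaluation code):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Wilcoxon-Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked in the correct order, with ties
    counted as half-correct."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.2])
labels = np.array([1, 0, 1, 0])
# 3 of the 4 (positive, negative) pairs are ordered correctly.
a = auc(scores, labels)
```

This pairwise view of AUC is also what makes it a natural objective for ranking-based methods such as SVMperf.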

192 | A dual coordinate descent method for large-scale linear svm
- Hsieh, Chang, et al.
- 2008
Citation Context: ...ices Γ_i. Another advantage of this formulation is that no assumptions on the form of these relationships (e.g., pairwise relationship) are made. 5. Efficient algorithm: We follow the work of Lin et al. [10] and solve Eq. (13) by coordinate descent. At each iteration, we choose one training example (x_i, y_i) and the related variables α_i = (α_i^1, ..., α_i^K), while fixing the remaining variables. The re...
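The coordinate-descent scheme of Hsieh et al. [10] can be illustrated on the plain binary linear SVM dual, where one dual variable α_i is updated in closed form while w = Σ_i α_i y_i x_i is maintained incrementally. This is a simplified sketch of the binary case under those assumptions, not the paper's multi-label variant:

```python
import numpy as np

def dcd_svm(X, y, C=1.0, epochs=20):
    """Dual coordinate descent for a binary linear SVM (hinge loss):
    optimize one dual variable at a time, keeping w up to date."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = (X ** 2).sum(axis=1)  # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * X[i].dot(w) - 1.0                   # partial gradient for alpha_i
            new = min(max(alpha[i] - g / Qii[i], 0.0), C)  # project onto [0, C]
            w += (new - alpha[i]) * y[i] * X[i]            # incremental update of w
            alpha[i] = new
    return w

# Toy separable problem: positives upper-right, negatives opposite.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dcd_svm(X, y)
```

Because each update touches only one example and w is updated in O(d), a full pass costs O(nd), which is what makes the approach attractive for large-scale linear models.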

118 | Collective multi- label classification
- Ghamrawi, McCallum
- 2005
Citation Context: ...multi-label ranking is usually more robust than the classification approaches, particularly when the number of classes is large. Although several algorithms have been proposed for multi-label learning [22, 21, 8, 15], they are usually computationally expensive because the number of comparisons in multi-label ranking is O(nK²), where K is the number of classes and n is the number of training examples. The quadrat...
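The O(nK²) comparison count comes from enumerating, for each instance, every (relevant, irrelevant) class pair; a hypothetical helper makes the count concrete:

```python
def ranking_constraints(relevant, num_classes):
    """Pairwise constraints for one multi-labeled instance: every relevant
    class must outrank every irrelevant one.  With r relevant classes this
    gives r * (K - r) pairs, which is O(K^2) per instance and O(n K^2)
    over n instances -- the cost the text refers to."""
    irrelevant = [k for k in range(num_classes) if k not in relevant]
    return [(r, s) for r in relevant for s in irrelevant]

pairs = ranking_constraints({0, 2}, 5)  # classes 0 and 2 relevant out of K=5
```

For an instance with 2 relevant classes out of 5, this yields 2 × 3 = 6 ordering constraints; with K in the hundreds the per-instance constraint set grows quadratically, which motivates the efficient formulation the paper proposes.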

107 | Log-linear models for label ranking.
- Dekel, Manning, et al.
- 2003
Citation Context: ...[9] to enforce that the ranking of relevant classes is higher than the irrelevant ones. [3] improves the computational efficiency of [9] by only considering the most violated constraints. Dekel et al. [5] and Shalev-Shwartz et al. [21] encode the ranking using a preference graph. In [5] a boosting-based algorithm is used to learn the classifiers from a set of given instances and the corresponding pref...

66 | Constraint classification for multiclass classification and ranking
- Har-Peled, Roth, et al.
- 2003
Citation Context: ...assigned to a single image. Our experiment with the PASCAL VOC 2006 dataset shows encouraging results in terms of both efficiency and efficacy. 2. Previous work: The ranking approach was first proposed in [9] for multi-label learning problems. Constraints derived from the multi-labeled instances were used in [9] to enforce that the ranking of relevant classes is higher than the irrelevant ones. [3] improve...

53 | Extracting shared subspace for multi-label classification
- Ji, Tang, et al.
- 2008
Citation Context: ...al Bayesian approach is used in [24] to capture the dependency among classes. Overall, these approaches are computationally expensive when the number of classes is large. There are several approaches [17, 12, 25, 20, 16] for multi-label learning which encode the class dependence by assuming the sharing of important features among classes. [12] showed that a shared subspace model outperforms a number of state-of-the-art ...

43 | Semi-supervised multi-label learning by solving a sylvester equation
- Chen, Song, et al.
- 2008
Citation Context: ...classifiers from a set of given instances and the corresponding preference graphs. In [21] a generalization of the hinge loss for the preference graphs is used for learning the ranking of classes. In [2], which presents a semi-supervised algorithm for multi-label learning by solving a Sylvester Equation (SMSE), a graph is constructed to capture the similarities between pairwise categories. In [19] a...