
## Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

Citations: 2 (0 self)

### Citations

2418 | A global geometric framework for nonlinear dimensionality reduction
- Tenenbaum, Silva, et al.
- 2000
Citation Context: ...ones; and accordingly local similarities can be propagated to non-local points through a diffusion process on the graph. This is a mild assumption widely adopted by other manifold learning algorithms [23, 19]. Using K nearest neighbor (KNN) to measure local affinity, we construct G with associated similarity matrix: W(i, j) = { W(i, j) if xj ∈ KNN(xi); 0 otherwise } (3). Then the corresponding KNN matrix beco...
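The KNN sparsification in Eq. (3) can be sketched as follows. This is a minimal numpy illustration, not the paper's code; the Gaussian kernel and the default values of `K` and `sigma` are assumptions.

```python
import numpy as np

def knn_similarity(X, K=3, sigma=1.0):
    """Keep W(i, j) only when x_j is among the K nearest neighbors
    of x_i, as in Eq. (3); all other entries are set to 0."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances via broadcasting.
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    # Dense Gaussian similarities (an assumed choice of kernel h).
    W_full = np.exp(-D / (2.0 * sigma ** 2))
    W = np.zeros_like(W_full)
    for i in range(n):
        # Indices of the K nearest neighbors, excluding x_i itself.
        nn = np.argsort(D[i])[1:K + 1]
        W[i, nn] = W_full[i, nn]
    return W
```

Note that the resulting matrix is generally asymmetric, since the KNN relation itself is not symmetric.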

2381 | Nonlinear dimensionality reduction by locally linear embedding
- Roweis, Saul
- 2000
Citation Context: ...ones; and accordingly local similarities can be propagated to non-local points through a diffusion process on the graph. This is a mild assumption widely adopted by other manifold learning algorithms [23, 19]. Using K nearest neighbor (KNN) to measure local affinity, we construct G with associated similarity matrix: W(i, j) = { W(i, j) if xj ∈ KNN(xi); 0 otherwise } (3). Then the corresponding KNN matrix beco...

1856 | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories
- Lazebnik, Schmid, et al.
- 2006
Citation Context: ...re 4. Sample images chosen from Caltech 101. We use two variants of the SIFT feature: SIFT with locality-constrained linear coding (siftLLC) [27] and SIFT with Spatial Pyramid Matching (siftSPM) [15]. The SIFT features are both extracted from 16 × 16 pixel patches on a grid with step size of 8 pixels. The codebooks are obtained by standard K-means clustering with a codebook size of 2,048. The dis...

Combining labeled and unlabeled data with co-training
- Blum, Mitchell
- 1998
Citation Context: ...d within the graph. A wide range of applications such as classification, ranking, and retrieval [37] have adopted the label propagation strategy. Another type of semi-supervised learning, co-training [5], utilizes multiview features to help each other by pulling out unlabeled data to re-train and enhance the classifiers. The above methods are mainly designed to deal with the binary classification pro...

796 | Distance metric learning with application to clustering with side-information
- Xing, Ng, et al.
- 2003
Citation Context: ...s. Supervised metric learning methods often learn a Mahalanobis distance by encouraging small distances among points of the same label while maintaining large distances for points of different labels [29, 28]. Graph-based semi-supervised learning frameworks, on the other hand, utilize a limited amount of labeled data to explore information on a large volume of unlabeled data. Label propagation (LP) [36] spec...

750 | Semi-Supervised Learning Literature Survey
- Zhu
- 2005
Citation Context: ...t nodes connected by edges of large similarity tend to have the same label through information propagated within the graph. A wide range of applications such as classification, ranking, and retrieval [37] have adopted the label propagation strategy. Another type of semi-supervised learning, co-training [5], utilizes multiview features to help each other by pulling out unlabeled data to re-train and en...
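The propagation scheme this context describes, where nodes joined by high-similarity edges come to share labels, can be sketched minimally. This is a hedged numpy illustration of the classic clamped-iteration variant; the iteration count and all data shapes are assumptions, not details from the cited survey.

```python
import numpy as np

def label_propagation(W, Y, labeled, n_iter=50):
    """Iterate F <- P F while clamping the labeled rows, where P
    row-normalizes the similarity matrix W.  W: (n, n) similarities;
    Y: (n, c) one-hot labels, all-zero rows for unlabeled points;
    labeled: boolean mask marking the labeled points."""
    P = W / W.sum(axis=1, keepdims=True)
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = Y[labeled]  # clamp the known labels each step
    return F.argmax(axis=1)
```

Each unlabeled point ends up with the label whose mass diffuses to it most strongly through the graph.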

677 | Distance metric learning for large margin nearest neighbor classification
- Weinberger, Blitzer, et al.
- 2006
Citation Context: ...s. Supervised metric learning methods often learn a Mahalanobis distance by encouraging small distances among points of the same label while maintaining large distances for points of different labels [29, 28]. Graph-based semi-supervised learning frameworks, on the other hand, utilize a limited amount of labeled data to explore information on a large volume of unlabeled data. Label propagation (LP) [36] spec...

652 | Learning with local and global consistency
- Zhou, Bousquet, et al.
Citation Context: ...s Learning We compare our DLP with several popular semi-supervised learning methods: 1) Label Propagation (LP); 2) a variant of LP on a KNN structure (LP+KNN) [22]; 3) Local and Global Consistency (LGC) [34]; 4) Transductive SVM (TSVM); 5) LapRLS [3]. Note that for LP and LGC, we use one-vs-the-rest methods to deal with multi-class problems; TSVM and LapRLS have their own multi-class extensions....

553 | Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers
- Allwein, Schapire, et al.
- 2000
Citation Context: ...strategy toward multi-class/multi-label learning is to divide it into a set of binary classification problems, using techniques such as one-versus-the-rest, one-versus-one, and error-correcting coding [1]. These methods, however, have certain limitations, including: (1) the difficulty of scaling up to large data sets, and (2) the inability to exploit the coherence and relations among classes due to the use of ...
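The one-versus-the-rest reduction mentioned in this context can be sketched as follows. The nearest-centroid binary scorer is a hypothetical stand-in for a real binary classifier (e.g. an SVM), invented here for self-containment; nothing in it comes from the cited work.

```python
import numpy as np

class CentroidBinary:
    """Toy binary scorer (hypothetical stand-in for a real binary
    classifier): scores points by how much closer they are to the
    positive centroid than to the negative one."""
    def fit(self, X, y):
        self.mu_pos = X[y == 1].mean(axis=0)
        self.mu_neg = X[y == 0].mean(axis=0)
        return self

    def score(self, X):
        return (np.linalg.norm(X - self.mu_neg, axis=1)
                - np.linalg.norm(X - self.mu_pos, axis=1))

def one_vs_rest(X, y, X_test):
    """One-versus-the-rest reduction: train one binary model per
    class (class c vs. everything else), predict by highest score."""
    classes = np.unique(y)
    models = [CentroidBinary().fit(X, (y == c).astype(int)) for c in classes]
    scores = np.column_stack([m.score(X_test) for m in models])
    return classes[scores.argmax(axis=1)]
```

The limitation the context notes is visible in the structure: each binary model is trained independently, so correlations among classes never enter the scores.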

429 | Locality-constrained linear coding for image classification
- Wang, Yang, et al.
- 2010
Citation Context: ...0 pixels. Fig. 4 shows some samples of the subset. Figure 4. Sample images chosen from Caltech 101. We use two variants of the SIFT feature: SIFT with locality-constrained linear coding (siftLLC) [27] and SIFT with Spatial Pyramid Matching (siftSPM) [15]. The SIFT features are both extracted from 16 × 16 pixel patches on a grid with step size of 8 pixels. The codebooks are obtained by standard K-m...

357 | One-shot learning of object categories
- Fei-Fei, Fergus, et al.
- 2006
Citation Context: ...he LP algorithm. Also, TSVM and LapRLS have much heavier computational burdens (O(n^3)) than DLP. 5.1.3 Caltech 101 We also test our algorithm on the well-known Caltech-101 dataset [9], which consists of 101 classes and a collection of background images. We selected 12 classes (including animals, faces, buildings, etc.) from Caltech-101, which (footnote 1: http://yann.lecun.com/exdb/mnist/) Tabl...

154 | Diffusion maps and coarse-graining: A unified framework for dimensionality reduction, graph partitioning and data set parameterization
- Lafon, Lee
- 2006
Citation Context: ...ed transition matrix by fusing information of both data features and data labels in each iteration. Given the kernel Pt, where t denotes the number of iterations, we can define the diffusion distance [14] at time t as: Dt(i, j) = ‖ Pt(i, :) − Pt(j, :) ‖ . (5) The diffusion process maps the data space into an n-dimensional space R^n_t, in which each data point is represented by its transition probability to ...
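The diffusion distance of Eq. (5) can be computed directly from the transition matrix. This is a minimal numpy sketch; the dense matrix-power computation is an assumption made for clarity, not the cited paper's implementation.

```python
import numpy as np

def diffusion_distance(P, t):
    """Diffusion distance of Eq. (5): D_t(i, j) is the Euclidean
    distance between rows i and j of the t-step transition matrix P^t."""
    Pt = np.linalg.matrix_power(P, t)
    # Pairwise Euclidean distances between the rows of P^t.
    diff = Pt[:, None, :] - Pt[None, :, :]
    return np.linalg.norm(diff, axis=2)
```

Two points are diffusion-close when their t-step transition distributions over the whole graph nearly coincide, which is exactly the row comparison above.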

107 | Semi-Supervised Learning with Graphs
- Zhu
- 2005
Citation Context: ... to obtain a single fixed distance metric for points in the entire data space. Moreover, nice properties enjoyed by graph-based (built on the distance metric) two-class semi-supervised classification [36] become less obvious in the multi-class classification situations [11], due to the correlations of the multiple labels. Supervised metric learning methods often learn a Mahalanobis distance by encoura...

95 | On manifold regularization
- Belkin, Niyogi, et al.
- 2005
Citation Context: ...popular semi-supervised learning methods: 1) Label Propagation (LP); 2) a variant of LP on a KNN structure (LP+KNN) [22]; 3) Local and Global Consistency (LGC) [34]; 4) Transductive SVM (TSVM); 5) LapRLS [3]. Note that for LP and LGC, we use one-vs-the-rest methods to deal with multi-class problems; TSVM and LapRLS have their own multi-class extensions. 5.1.1 Benchmarks We test our method on th...

68 | Unsupervised and semi-supervised multi-class support vector machines
- Xu, Schuurmans
- 2005
Citation Context: ...i-class learning. The existing algorithms can be roughly classified into three categories. 1) Density-based: a recent notable advance in density-based methods is a multi-class extension to the TSVM by [30]; however, its high computational cost limits it from being widely adopted. 2) Boosting-based: there are a variety of semi-supervised multi-class extensions to boosting methods [24, 20]; these met...

66 | Constraint classification for multiclass classification and ranking
- Har-Peled, Roth, et al.
- 2003
Citation Context: ...a space. Moreover, nice properties enjoyed by graph-based (built on the distance metric) two-class semi-supervised classification [36] become less obvious in the multi-class classification situations [11], due to the correlations of the multiple labels. Supervised metric learning methods often learn a Mahalanobis distance by encouraging small distances among points of the same label while maintaining ...

57 | Correlated label propagation with application to multi-label learning
- Kang, Jin, et al.
- 2006
Citation Context: ...ntion in multi-label learning. A maximum entropy method is employed to model the correlations among categories in [35]. [18] studies a hierarchical structure to handle the correlation information. In [12], a correlated label propagation framework is developed for multi-label learning that explicitly fuses the information of different classes. However, these methods are only for supervised learning, an...

52 | Semi-supervised multilabel learning by constrained non-negative matrix factorization
- Liu, Jin, et al.
- 2006
Citation Context: ...at explicitly fuses the information of different classes. However, these methods are only for supervised learning, and how to make use of label correlation among unlabeled instances is still unclear. [16] uses constrained non-negative matrix factorization to propagate the label information by enforcing examples with similar input patterns to share similar sets of class labels. Another semi-supervi...

Multi-labelled classification using maximum entropy method
- Zhu, Ji, et al.
- 2005
Citation Context: ...ss/multi-label learning is to use a one-vs-all strategy. The disadvantage of one-vs-all approaches, however, is that the correlations among different classes are not fully utilized. As discussed in [35], taking the class correlations into account often leads to a significant performance improvement. In this paper, we propose a new method, dynamic label propagation (DLP), to simultaneously deal with ...

50 | Multi-label informed latent semantic indexing
- Yu, Yu, et al.
Citation Context: ...mes that two instances tend to have large overlap in their assigned labels if they share high similarity in their input patterns. The third one is Multi-label Informed Latent Semantic Indexing (MISL) [32], which maps the input features into a new feature space that captures the structure of both input data and label dependency, and then uses SVM in the projected space. The fourth one is a recent ...

43 | Semi-supervised multi-label learning by solving a sylvester equation
- Chen, Song, et al.
- 2008
Citation Context: ...ix factorization to propagate the label information by enforcing examples with similar input patterns to share similar sets of class labels. Another semi-supervised multi-label learning technique [7] develops a regularization with two energy terms, about the smoothness of input instances and the label information, by solving a Sylvester equation. A similar algorithm [33] solves the multi-label problem with...

30 | The rendezvous algorithm: multiclass semi-supervised learning with markov random walks
- Azran
- 2007
Citation Context: ...res (especially for the unlabeled data), which, to some extent, jeopardizes the classification accuracy. 3) Graph-based: some recent advances adopt Gaussian Processes [21, 17] or Markov Random Walks [2]. Transduction by Laplacian graph [4, 10] is also shown to be able to solve multi-class semi-supervised problems; although these algorithms make use of the relationship between unlabeled and labeled d...

30 | Efficient Graph-Based Semi-Supervised Learning of Structured Tagging Models
- Subramanya, Petrov, et al.
- 2010
Citation Context: ... Experiments 5.1. Semi-supervised Multi-class Learning We compare our DLP with several popular semi-supervised learning methods: 1) Label Propagation (LP); 2) a variant of LP on a KNN structure (LP+KNN) [22]; 3) Local and Global Consistency (LGC) [34]; 4) Transductive SVM (TSVM); 5) LapRLS [3]. Note that for LP and LGC, we use one-vs-the-rest methods to deal with multi-class problems; for TSVM and LapRLS,...

26 | Graph-based semi-supervised learning with multiple labels
- Zha, Mei, et al.
- 2009
Citation Context: ...vised multi-label learning technique [7] develops a regularization with two energy terms, about the smoothness of input instances and the label information, by solving a Sylvester equation. A similar algorithm [33] solves the multi-label problem with an optimization framework with a regularization of the Laplacian matrix. Different from these semi-supervised multi-label methods, the proposed method explicitly merg...

21 | Improving Shape Retrieval by Learning Graph Transduction (ECCV)
- Yang, Bai, et al.
- 2008
Citation Context: ... ), (1) for some function h with exponential decay at infinity. A common choice is h(x) = exp(−x). Note that µ and σ are hyper-parameters; σ is learned from the mean distance to the K nearest neighbors [31]. A natural transition matrix on V can be defined by normalizing the weight matrix as: P(i, j) = W(i, j) / ∑k∈V W(i, k), (2) so that ∑j∈V P(i, j) = 1. Note that P is asymmetric after the normaliza...
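Eq. (2)'s row normalization is a one-liner in numpy. A minimal sketch; the example matrix in the usage note is invented.

```python
import numpy as np

def transition_matrix(W):
    """Row-normalize W into Eq. (2)'s transition matrix:
    P(i, j) = W(i, j) / sum_k W(i, k), so every row sums to 1.
    As the context notes, P is generally asymmetric even when
    W itself is symmetric, because each row gets its own divisor."""
    return W / W.sum(axis=1, keepdims=True)
```

For instance, a symmetric W with rows summing to 3, 4, and 5 yields P(0, 1) = 1/3 but P(1, 0) = 1/4, illustrating the asymmetry.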

Transduction with Matrix Completion: Three Birds with One Stone
- Goldberg, Zhu, et al.
- 2010
Citation Context: ...a), which, to some extent, jeopardizes the classification accuracy. 3) Graph-based: some recent advances adopt Gaussian Processes [21, 17] or Markov Random Walks [2]. Transduction by Laplacian graph [4, 10] is also shown to be able to solve multi-class semi-supervised problems; although these algorithms make use of the relationship between unlabeled and labeled data, their computational complexity is de...

13 | Semi-supervised Laplacian regularization of kernel canonical correlation analysis
- Blaschko, Lampert, et al.
- 2008
Citation Context: ...a), which, to some extent, jeopardizes the classification accuracy. 3) Graph-based: some recent advances adopt Gaussian Processes [21, 17] or Markov Random Walks [2]. Transduction by Laplacian graph [4, 10] is also shown to be able to solve multi-class semi-supervised problems; although these algorithms make use of the relationship between unlabeled and labeled data, their computational complexity is de...

Multi-class semi-supervised learning with the ε-truncated multinomial probit Gaussian process
- Rogers, Girolami
- 2007
Citation Context: ...n between labels and input features (especially for the unlabeled data), which, to some extent, jeopardizes the classification accuracy. 3) Graph-based: some recent advances adopt Gaussian Processes [21, 17] or Markov Random Walks [2]. Transduction by Laplacian graph [4, 10] is also shown to be able to solve multi-class semi-supervised problems; although these algorithms make use of the relationship betw...

12 | Semi-supervised boosting for multi-class classification
- Valizadegan, Jin, et al.
- 2008
Citation Context: ...to the TSVM by [30]; however, its high computational cost limits it from being widely adopted. 2) Boosting-based: there are a variety of semi-supervised multi-class extensions to boosting methods [24, 20]; these methods differ in their loss functions and regularization techniques; their disadvantage is the lack of ability to utilize the correlation between labels and input features (especially for...

11 | Transductive multilabel learning via label set propagation
- Kong, Ng, et al.
Citation Context: ...doing projection on the fused manifolds, DLP further takes advantage of the correlations among labeling information of unlabeled data. Our work also differs significantly from a very recent algorithm [13], which emphasizes the learning of fusion parameters for unlabeled data; the focus here is however the dynamic update of the similarity functions from both data and label information. In addition, our...

10 | Regularized Multi-Class SemiSupervised Boosting
- Saffari, Leistner, et al.
- 2009
Citation Context: ...to the TSVM by [30]; however, its high computational cost limits it from being widely adopted. 2) Boosting-based: there are a variety of semi-supervised multi-class extensions to boosting methods [24, 20]; these methods differ in their loss functions and regularization techniques; their disadvantage is the lack of ability to utilize the correlation between labels and input features (especially for...

6 | On maximum margin hierarchical multi-label classification
- Rousu, Saunders, et al.
- 2004
Citation Context: ...tions among data categories. Recently, category correlations have been given more attention in multi-label learning. A maximum entropy method is employed to model the correlations among categories in [35]. [18] studies a hierarchical structure to handle the correlation information. In [12], a correlated label propagation framework is developed for multi-label learning that explicitly fuses the information o...

5 | Unsupervised metric fusion by cross diffusion
- Wang, Jiang, et al.
- 2012
Citation Context: ...Eqn. (6) would result in a degeneration in the first round, when the learned label information of unlabeled data is not accurate enough to infer the similarities in the input space. Hence, inspired by [25], we need to re-emphasize the intrinsic structure between all the input data by the KNN matrix. From (13), we can see that the diffusion process propagates the similarities through the KNN matrix. In...

4 | Affinity learning via self-diffusion for image segmentation and clustering
- Wang, Tu
- 2012
Citation Context: ...challenging dataset, since it contains multiple classes and only one instance in each class is labeled. We test the effect of the two steps in the dynamic label propagation. We construct the KNN matrix in the same way as in [26]. First, we omit the first step, which fuses the label correlation with the kernel matrix. The other steps are all the same. The result is shown in Fig. 3(B). Second, we do the first step to fuse labe...

Graph based multi-class semi-supervised learning using Gaussian process
- Song, Zhang, et al.
- 2006
Citation Context: ...n between labels and input features (especially for the unlabeled data), which, to some extent, jeopardizes the classification accuracy. 3) Graph-based: some recent advances adopt Gaussian Processes [21, 17] or Markov Random Walks [2]. Transduction by Laplacian graph [4, 10] is also shown to be able to solve multi-class semi-supervised problems; although these algorithms make use of the relationship betw...

1 | A new family of online algorithms for category ranking
- Crammer, Singer
- 2002
Citation Context: ...r, there are far fewer attempts to tackle the semi-supervised multi-label problem, despite there being a rich body of literature on supervised multi-label learning. One popular method is label ranking [8], which learns a ranking function over category labels from the labeled instances and classifies each unlabeled instance by thresholding the scores of the learned ranking functions. Although being easy...