Results 1–10 of 121
Semi-Supervised Learning Literature Survey
2006
"... We review the literature on semisupervised learning, which is an area in machine learning and more generally, artificial intelligence. There has been a whole
spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semisupervised learning. This document is a chapter ..."
Cited by 757 (8 self)
Abstract:
We review the literature on semi-supervised learning, which is an area in machine learning and, more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e., semi-supervised learning. This document is a chapter excerpt from the author's doctoral thesis (Zhu, 2005). However, the author plans to update the online version frequently to incorporate the latest developments in the field. Please obtain the latest version at http://www.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf
Random walks for image segmentation
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
"... Abstract—A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with userdefined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach on ..."
Cited by 385 (21 self)
Abstract:
A novel method is proposed for performing multi-label, interactive image segmentation. Given a small number of pixels with user-defined (or pre-defined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the pre-labeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs. Index Terms: image segmentation, interactive segmentation, graph theory, random walks, combinatorial Dirichlet problem, harmonic functions, Laplace equation, graph cuts, boundary completion.
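The computation described above reduces to solving one sparse linear system per label (the combinatorial Dirichlet problem). Below is a minimal sketch of that reduction, assuming a 4-connected image lattice and Gaussian edge weights w_ij = exp(-beta * (I_i - I_j)^2); the function name and the value of beta are illustrative, and the paper's full formulation also covers arbitrary graphs and dimensions.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def random_walker(image, seeds, beta=90.0):
    """Sketch of random-walker segmentation.

    image : (H, W) float array of intensities
    seeds : (H, W) int array, 0 = unlabeled, 1..K = user labels
    """
    H, W = image.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    # Gaussian edge weights on a 4-connected lattice.
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[:H - di, :W - dj].ravel()
        b = idx[di:, dj:].ravel()
        w = np.exp(-beta * (image[:H - di, :W - dj].ravel()
                            - image[di:, dj:].ravel()) ** 2)
        rows += [a, b]; cols += [b, a]; vals += [w, w]
    Wm = coo_matrix((np.concatenate(vals),
                     (np.concatenate(rows), np.concatenate(cols))),
                    shape=(n, n)).tocsr()
    L = (diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm).tocsr()

    labels = seeds.ravel()
    marked = labels > 0
    Lu = L[~marked][:, ~marked].tocsc()   # Laplacian over unlabeled nodes
    B = L[~marked][:, marked]             # coupling to the seed nodes

    # One Dirichlet problem per label: Lu x = -B m, where m indicates
    # which seeds carry label k; x[i] is the probability that a random
    # walker from unlabeled pixel i first reaches a seed of label k.
    K = int(labels.max())
    probs = np.empty((int((~marked).sum()), K))
    for k in range(1, K + 1):
        m = (labels[marked] == k).astype(float)
        probs[:, k - 1] = spsolve(Lu, -(B @ m))

    out = labels.copy()
    out[~marked] = probs.argmax(axis=1) + 1   # assign most probable label
    return out.reshape(H, W)
```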
Multiple-instance active learning
In Advances in Neural Information Processing Systems (NIPS), 2008
"... We present a framework for active learning in the multipleinstance (MI) setting. In an MI learning problem, instances are naturally organized into bags and it is the bags, instead of individual instances, that are labeled for training. MI learners assume that every instance in a bag labeled negativ ..."
Cited by 111 (7 self)
Abstract:
We present a framework for active learning in the multiple-instance (MI) setting. In an MI learning problem, instances are naturally organized into bags, and it is the bags, instead of individual instances, that are labeled for training. MI learners assume that every instance in a bag labeled negative is actually negative, whereas at least one instance in a bag labeled positive is actually positive. We consider the particular case in which an MI learner is allowed to selectively query unlabeled instances from positive bags. This approach is well motivated in domains in which it is inexpensive to acquire bag labels and possible, but expensive, to acquire instance labels. We describe a method for learning from labels at mixed levels of granularity, and introduce two active query selection strategies motivated by the MI setting. Our experiments show that learning from instance labels can significantly improve the performance of a basic MI learning algorithm in two multiple-instance domains: content-based image retrieval and text classification.
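As a rough illustration of the query setting described above, the sketch below propagates bag labels down to instances, trains a standard classifier, and queries the least certain instance from a positive bag. Plain uncertainty sampling and the logistic model are stand-ins for the paper's MI-specific selection strategies, and all names here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_mi_query(bags, bag_labels, bought):
    """Pick one positive-bag instance to query (a sketch).

    bags : list of (n_i, d) arrays of instances
    bag_labels : list of 0/1 bag labels
    bought : dict (bag_idx, inst_idx) -> 0/1, instance labels
        already purchased from the oracle
    """
    X, y, candidates = [], [], []
    for b, (bag, lab) in enumerate(zip(bags, bag_labels)):
        for i, x in enumerate(bag):
            if (b, i) in bought:          # known instance label
                X.append(x); y.append(bought[(b, i)])
            elif lab == 0:                # negative bag: all instances negative
                X.append(x); y.append(0)
            else:                         # positive bag: labels unknown
                X.append(x); y.append(1)  # optimistic initialization
                candidates.append((b, i, x))

    clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

    # Uncertainty sampling: query the candidate whose predicted
    # probability of being positive is closest to 0.5.
    p = clf.predict_proba(np.array([c[2] for c in candidates]))[:, 1]
    b, i, _ = candidates[int(np.argmin(np.abs(p - 0.5)))]
    return b, i
```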
Analysis of a greedy active learning strategy
2005
"... We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning r ..."
Cited by 104 (3 self)
Abstract:
We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.
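The greedy rule analyzed here is usually described as version-space bisection: query the point whose label most evenly splits the hypotheses still consistent with everything seen so far. A minimal sketch, assuming a finite hypothesis class stored as a boolean label matrix (names are illustrative):

```python
import numpy as np

def greedy_query(hypotheses, pool, observed):
    """Greedy version-space bisection (a sketch).

    hypotheses : (m, n) boolean array; row h is hypothesis h's
        labeling of all n points
    pool : list of point indices still available for querying
    observed : dict point_index -> observed boolean label
    """
    # Version space: hypotheses consistent with all observed labels.
    consistent = np.ones(hypotheses.shape[0], dtype=bool)
    for j, yj in observed.items():
        consistent &= hypotheses[:, j] == yj
    V = hypotheses[consistent]

    # Query the point whose positive fraction over V is closest to 1/2,
    # i.e. the query that splits the version space most evenly.
    pos_frac = V[:, pool].mean(axis=0)
    return pool[int(np.argmin(np.abs(pos_frac - 0.5)))]
```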
Active learning with Gaussian processes for object categorization
In ICCV, 2007
"... Discriminative methods for visual object category recognition are typically nonprobabilistic, predicting class labels but not directly providing an estimate of uncertainty. Gaussian Processes (GPs) are powerful regression techniques with explicit uncertainty models; we show here how Gaussian Proces ..."
Cited by 95 (14 self)
Abstract:
Discriminative methods for visual object category recognition are typically non-probabilistic, predicting class labels but not directly providing an estimate of uncertainty. Gaussian Processes (GPs) are powerful regression techniques with explicit uncertainty models; we show here how Gaussian Processes with covariance functions based on a Pyramid Match Kernel (PMK) can be used for probabilistic object category recognition. The uncertainty model provided by GPs offers confidence estimates at test points, and naturally allows for an active learning paradigm in which points are optimally selected for interactive labeling. We derive a novel active category learning method based on our probabilistic regression model, and show that a significant boost in classification performance is possible, especially when the amount of training data for a category is very small.
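The active selection this abstract refers to can be sketched with any GP library. Below, scikit-learn's GP regressor with an RBF kernel stands in for the paper's Pyramid Match Kernel (not available in scikit-learn), and pool points are scored by |mean| / std so that points near the decision boundary with high posterior variance are queried first; the noise level alpha is an illustrative choice.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_active_select(X_train, y_train, X_pool):
    """Return the index of the pool point to label next (a sketch).

    y_train : +1/-1 category labels, handled by GP regression as a
        common simplification of GP classification.
    """
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=1e-2).fit(X_train, y_train)
    mean, std = gp.predict(X_pool, return_std=True)

    # Small |mean| = near the decision boundary; large std = poorly
    # covered by training data. Both push a point toward being queried.
    return int(np.argmin(np.abs(mean) / std))
```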
Selective Supervision: Guiding Supervised Learning with Decision-Theoretic Active Learning
In International Joint Conference on Artificial Intelligence, 2007
"... An inescapable bottleneck with learning from large data sets is the high cost of labeling training data. Unsupervised learning methods have promised to lower the cost of tagging by leveraging notions of similarity among data points to assign tags. However, unsupervised and semisupervised learning t ..."
Cited by 56 (7 self)
Abstract:
An inescapable bottleneck with learning from large data sets is the high cost of labeling training data. Unsupervised learning methods have promised to lower the cost of tagging by leveraging notions of similarity among data points to assign tags. However, unsupervised and semi-supervised learning techniques often provide poor results due to errors in estimation. We look at methods that guide the allocation of human effort for labeling data so as to get the greatest boosts in discriminatory power with increasing amounts of work. We focus on the application of value of information to Gaussian Process classifiers and explore the effectiveness of the method on the task of classifying voice messages.
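A value-of-information query weighs the expected reduction in misclassification cost against the cost of acquiring a label. The sketch below spells out that trade-off, with a logistic model standing in for the paper's Gaussian Process classifier; the cost constants and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def value_of_information(clf, X_lab, y_lab, X_pool,
                         label_cost=1.0, misclass_cost=10.0):
    """Net expected value of querying each pool point (a sketch)."""
    def risk(model):
        # Expected misclassification cost over the unlabeled pool.
        p = model.predict_proba(X_pool)
        return misclass_cost * np.sum(1.0 - p.max(axis=1))

    base_risk = risk(clf)
    probs = clf.predict_proba(X_pool)
    classes = list(clf.classes_)
    scores = np.empty(len(X_pool))
    for i, x in enumerate(X_pool):
        # Average retrained risk over the possible labels of x,
        # weighted by the current model's belief about x.
        exp_risk = 0.0
        for k, y in enumerate(classes):
            m = LogisticRegression(max_iter=1000).fit(
                np.vstack([X_lab, x]), np.append(y_lab, y))
            exp_risk += probs[i, k] * risk(m)
        scores[i] = (base_risk - exp_risk) - label_cost
    return scores   # query argmax(scores) while it stays positive
```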
On semi-supervised classification
2005
"... A graphbased prior is proposed for parametric semisupervised classification. The prior utilizes both labelled and unlabelled data; it also integrates features from multiple views of a given sample (e.g., multiple sensors), thus implementing a Bayesian form of cotraining. An EM algorithm for train ..."
Cited by 48 (10 self)
Abstract:
A graph-based prior is proposed for parametric semi-supervised classification. The prior utilizes both labelled and unlabelled data; it also integrates features from multiple views of a given sample (e.g., multiple sensors), thus implementing a Bayesian form of co-training. An EM algorithm for training the classifier automatically adjusts the trade-off between the contributions of: (a) the labelled data; (b) the unlabelled data; and (c) the co-training information. Active label query selection is performed using a mutual-information-based criterion that explicitly uses the unlabelled data and the co-training information. Encouraging results are presented on public benchmarks and on measured data from single and multiple sensors.
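The graph-based prior can be sketched as a smoothness penalty f^T L f on the classifier's outputs over all points, labelled and unlabelled, added to the labelled-data likelihood. The single-view, directly optimized version below is a simplification of the paper's multi-view, EM-trained model; names and the weight lam are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_graph_prior_logreg(X_lab, y_lab, X_all, W, lam=1.0):
    """Logistic regression with a graph-based smoothness prior (a sketch).

    X_lab, y_lab : labelled points with labels in {-1, +1}
    X_all : all points, labelled and unlabelled
    W : (n_all, n_all) symmetric similarity matrix over X_all
    """
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian

    def neg_log_posterior(w):
        # Labelled-data logistic log-loss ...
        nll = np.sum(np.logaddexp(0.0, -y_lab * (X_lab @ w)))
        # ... plus the prior: penalize outputs that vary across
        # strongly connected (similar) points, so the unlabelled
        # data shape the solution.
        f = X_all @ w
        return nll + lam * (f @ L @ f)

    res = minimize(neg_log_posterior, np.zeros(X_all.shape[1]),
                   method="L-BFGS-B")
    return res.x
```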
Multi-Label Image Segmentation for Medical Applications Based on Graph-Theoretic Electrical Potentials
ECCV, 2004
"... Abstract. A novel method is proposed for performing multilabel, semiautomated image segmentation. Given a small number of pixels with userdefined labels, one can analytically (and quickly) determine the probability that a random walker starting at each unlabeled pixel will first reach one of the ..."
Cited by 46 (10 self)
Abstract:
A novel method is proposed for performing multi-label, semi-automated image segmentation. Given a small number of pixels with user-defined labels, one can analytically (and quickly) determine the probability that a random walker starting at each unlabeled pixel will first reach one of the pre-labeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension.
Discriminative batch mode active learning
2007
"... Active learning sequentially selects unlabeled instances to label with the goal of reducing the effort needed to learn a good classifier. Most previous studies in active learning have focused on selecting one unlabeled instance to label at a time while retraining in each iteration. Recently a few ba ..."
Cited by 44 (3 self)
Abstract:
Active learning sequentially selects unlabeled instances to label with the goal of reducing the effort needed to learn a good classifier. Most previous studies in active learning have focused on selecting one unlabeled instance to label at a time while retraining in each iteration. Recently, a few batch mode active learning approaches have been proposed that select a set of most informative unlabeled instances in each iteration under the guidance of heuristic scores. In this paper, we propose a discriminative batch mode active learning approach that formulates the instance selection task as a continuous optimization problem over auxiliary instance selection variables. The optimization is formulated to maximize the discriminative classification performance of the target classifier, while also taking the unlabeled data into account. Although the objective is not convex, we can apply a quasi-Newton method to obtain a good local solution. Our empirical studies on UCI datasets show that the proposed active learning approach is more effective than current state-of-the-art batch mode active learning algorithms.
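The continuous formulation can be sketched by relaxing the binary selection indicators to [0, 1], running a quasi-Newton solver, and rounding. The uncertainty-plus-diversity objective below is an illustrative stand-in for the paper's discriminative objective; all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

def batch_select(X_lab, y_lab, X_pool, batch_size):
    """Select a batch of pool indices to label (a sketch)."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    p = clf.predict_proba(X_pool)[:, 1]
    # Per-instance informativeness: predictive entropy.
    uncert = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    # Pairwise similarity, to discourage redundant batches.
    sim = X_pool @ X_pool.T
    sim = sim / (np.abs(sim).max() + 1e-12)

    n = len(X_pool)

    def objective(mu):
        # Maximize total uncertainty of the (soft) batch while
        # minimizing within-batch similarity.
        return -(mu @ uncert) + 0.5 * (mu @ sim @ mu)

    res = minimize(objective, np.full(n, batch_size / n),
                   method="L-BFGS-B", bounds=[(0.0, 1.0)] * n)
    # Round the relaxed solution: keep the top-scoring indicators.
    return np.argsort(-res.x)[:batch_size]
```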