Results 1–10 of 28
Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning
In CVPR, 2010
Cited by 32 (0 self)
We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel using only image-level labels during training, i.e., the information of whether a certain object is present in the image or not. Such coarse tagging of images is faster and easier to obtain than the tedious pixelwise labeling required by state-of-the-art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use the Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution; here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On the MSRC21 dataset we are able, using 276 weakly labeled images, to achieve the performance of a supervised STF trained on a pixelwise-labeled training set of 56 images, a significant reduction in the supervision needed.
Multiple-instance learning with randomized trees
In European Conference on Computer Vision, 2010
Cited by 19 (3 self)
Multiple-instance learning (MIL) allows for training classifiers from ambiguously labeled data. In computer vision, this learning paradigm has recently been used in many applications such as object classification, detection, and tracking. This paper presents a novel multiple-instance learning algorithm for randomized trees called MIForests. Randomized trees are fast, inherently parallel, and multi-class, and are thus increasingly popular in computer vision. MIForests combine the advantages of these classifiers with the flexibility of multiple instance learning. In order to leverage randomized trees for MIL, we define the hidden class labels inside target bags as random variables. These random variables are optimized by training random forests and using a fast iterative homotopy method for solving the non-convex optimization problem. Additionally, most previously proposed MIL approaches operate in batch or offline mode and thus assume access to the entire training set, which limits their applicability in scenarios where the data arrives sequentially and in dynamic environments. We show that MIForests are not limited to offline problems and present an online extension of our approach. In the experiments, we evaluate MIForests on standard visual MIL benchmark datasets, where we achieve state-of-the-art results while being faster than previous approaches and being able to inherently solve multi-class problems. The online version of MIForests is evaluated on visual object tracking, where we outperform the state-of-the-art method based on boosting.
Gaussian Processes Multiple Instance Learning
Cited by 11 (0 self)
This paper proposes a multiple instance learning (MIL) algorithm for Gaussian processes (GP). The GPMIL model inherits two crucial benefits from GP: (i) a principled manner of learning kernel parameters, and (ii) a probabilistic interpretation (e.g., variance in prediction) that is informative for a better understanding of the MIL prediction problem. The bag labeling protocol of the MIL problem, namely the existence of a positive instance in a bag, can be effectively represented by a sigmoid likelihood model through the max function over GP latent variables. To circumvent the intractability of exact GP inference and learning incurred by the non-continuous max function, we suggest two approximations: first, the softmax approximation; second, the use of witness indicator variables optimized with a deterministic annealing schedule. The effectiveness of GPMIL against other state-of-the-art MIL approaches is demonstrated on several benchmark MIL datasets.
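The softmax approximation mentioned in the abstract can be sketched as replacing the non-smooth max over latent values with a temperature-scaled log-sum-exp before applying the sigmoid. This is a minimal illustration of that idea, not the paper's exact model; the temperature `alpha` is an illustrative parameter.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bag_likelihood_exact(f):
    """Bag likelihood under the MIL protocol: p(y=1 | f) = sigmoid(max_i f_i)."""
    return sigmoid(max(f))

def bag_likelihood_softmax(f, alpha=10.0):
    """Smooth surrogate: replace max with (1/alpha) * log sum_i exp(alpha * f_i),
    which upper-bounds the max and converges to it as alpha grows."""
    m = max(f)  # shift for numerical stability
    lse = m + math.log(sum(math.exp(alpha * (fi - m)) for fi in f)) / alpha
    return sigmoid(lse)
```

Because log-sum-exp is differentiable everywhere, gradients with respect to the latent values exist, which is what makes approximate GP inference tractable.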
A convex relaxation for weakly supervised classifiers
Cited by 11 (1 self)
This paper introduces a general multi-class approach to weakly supervised classification. Inferring the labels and learning the parameters of the model is usually done jointly through a block-coordinate descent algorithm such as expectation-maximization (EM), which may lead to local minima. To avoid this problem, we propose a cost function based on a convex relaxation of the softmax loss. We then propose an algorithm specifically designed to efficiently solve the corresponding semidefinite program (SDP). Empirically, our method compares favorably to standard ones on different datasets for multiple instance learning and semi-supervised learning, as well as on clustering tasks.
Text-Based Image Retrieval using Progressive Multi-Instance Learning
Cited by 9 (5 self)
Relevant and irrelevant images collected from the Web (e.g., Flickr.com) have been employed as loosely labeled training data for image categorization and retrieval. In this work, we propose a new approach to learn a robust classifier for text-based image retrieval (TBIR) using relevant and irrelevant training web images, in which we explicitly handle noise in the loose labels of training images. Specifically, we first partition the relevant and irrelevant training web images into clusters. By treating each cluster as a “bag” and the images in each bag as “instances”, we formulate this task as a multi-instance learning problem with constrained positive bags, in which each positive bag contains at least a portion of positive instances. We present a new algorithm called MIL-CPB to effectively exploit such constraints on positive bags and predict the labels of test instances (images). Observing that the constraints on positive bags may not always be satisfied in our application, we additionally propose a progressive scheme (referred to as Progressive MIL-CPB, or PMIL-CPB) to further improve the retrieval performance, in which we iteratively partition the top-ranked training web images from the current MIL-CPB classifier to construct more confident positive “bags” and then add these new “bags” as training data to learn the subsequent MIL-CPB classifiers. Comprehensive experiments on two challenging real-world web image data sets demonstrate the effectiveness of our approach.
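The constrained-positive-bag assumption described above (each positive bag contains at least a portion of positive instances) can be made concrete with a small checker; the fraction threshold here is an illustrative stand-in for the paper's bag-constraint parameter, and the function name is our own.

```python
def satisfies_positive_bag_constraint(bag_instance_labels, min_fraction=0.6):
    """Check the constrained-positive-bag assumption: a positive bag must
    contain at least `min_fraction` truly positive instances (label 1).
    This is stricter than the standard MIL rule of "at least one positive"."""
    positives = sum(1 for y in bag_instance_labels if y == 1)
    return positives >= min_fraction * len(bag_instance_labels)
```

The progressive scheme can then be read as re-clustering top-ranked images so that the new bags satisfy this check with higher confidence than the initial noisy clusters.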
MILD: Multiple-Instance Learning via Disambiguation
2009
Cited by 9 (1 self)
In multiple-instance learning (MIL), an individual example is called an instance and a bag contains one or more instances. The class labels available in the training set are associated with bags rather than instances. A bag is labeled positive if at least one of its instances is positive; otherwise, the bag is labeled negative. Since a positive bag may contain some negative instances in addition to one or more positive instances, the true labels of the instances in a positive bag may or may not agree with the bag label and, consequently, the instance labels are inherently ambiguous. In this paper, we propose a very efficient and robust MIL method, called MILD (Multiple-Instance Learning via Disambiguation), for general MIL problems. First, we propose a novel disambiguation method to identify the true positive instances in the positive bags. Second, we propose two feature representation schemes, one for instance-level classification and the other for bag-level classification, to convert the MIL problem into a standard single-instance learning (SIL) problem that can be solved by well-known SIL algorithms, such as the support vector machine. Third, an inductive semi-supervised learning method is proposed for MIL. We evaluate our methods extensively on several challenging MIL applications to demonstrate their promising efficiency, robustness, and accuracy.
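The standard MIL labeling rule stated in this abstract (positive iff at least one instance is positive) is simple enough to pin down directly; this one-liner is a sketch of the protocol itself, not of MILD's disambiguation step.

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive (1) iff it contains
    at least one positive instance; otherwise it is negative (0)."""
    return 1 if any(y == 1 for y in instance_labels) else 0
```

Note the ambiguity the abstract describes: `bag_label([1, 0, 0]) == 1` even though two of the three instances are negative, which is exactly what disambiguation methods try to resolve.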
M³IC: Maximum Margin Multiple Instance Clustering
Cited by 8 (0 self)
Clustering, classification, and regression are three major research topics in machine learning. So far, much work has been conducted on solving multiple instance classification and multiple instance regression problems, where supervised training patterns are given as bags and each bag consists of some instances. But research on unsupervised multiple instance clustering is still limited. This paper formulates a novel Maximum Margin Multiple Instance Clustering (M³IC) problem for the multiple instance clustering task. To avoid solving a non-convex optimization problem directly, M³IC is further relaxed, which enables an efficient optimization solution combining the Constrained Concave-Convex Procedure (CCCP) and the cutting plane method. Furthermore, this paper analyzes some important properties of the proposed method and its relationship to other related methods. An extensive set of empirical results demonstrates the advantages of the proposed method over existing work in both effectiveness and efficiency.
Convex multiple-instance learning by estimating likelihood ratio
In NIPS, 2010
Cited by 8 (0 self)
We propose an approach to multiple-instance learning that reformulates the problem as a convex optimization over the likelihood ratio between the positive and the negative class for each training instance. This is cast as joint estimation of both a likelihood ratio predictor and the target (likelihood ratio variable) for instances. Theoretically, we prove a quantitative relationship between the risk estimated under the 0-1 classification loss and under a loss function for the likelihood ratio. It is shown that likelihood ratio estimation is generally a good surrogate for the 0-1 loss and separates positive and negative instances well. The likelihood ratio estimates provide a ranking of instances within a bag and are used as input features to learn a linear classifier on bags of instances. Instance-level classification is achieved from the bag-level predictions and the individual likelihood ratios. Experiments on synthetic and real datasets demonstrate the competitiveness of the approach.
Fast exact inference for recursive cardinality models
In Uncertainty in Artificial Intelligence, 2012
Cited by 8 (4 self)
Cardinality potentials are a generally useful class of high-order potentials that affect probabilities based on how many of D binary variables are active. Maximum a posteriori (MAP) inference for cardinality potential models is well understood, with efficient computations taking O(D log D) time. Yet efficient marginalization and sampling have not been addressed as thoroughly in the machine learning community. We show that there exists a simple algorithm for computing marginal probabilities and drawing exact joint samples that runs in O(D log² D) time, and we show how to frame the algorithm as efficient belief propagation in a low-order tree-structured model that includes additional auxiliary variables. We then develop a new, more general class of models, termed Recursive Cardinality models, which take advantage of this efficiency. Finally, we show how to do efficient exact inference in models composed of a tree structure and a cardinality potential. We explore the expressive power of Recursive Cardinality models and empirically demonstrate their utility.
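The core computational object here is the distribution over how many of the D variables are active, which can be built by divide-and-conquer convolution of count distributions. The sketch below uses naive convolution for clarity; replacing it with FFT-based convolution yields the O(D log² D) behavior the abstract refers to. It illustrates the counting structure only, not the paper's belief-propagation framing.

```python
def convolve(p, q):
    """Distribution of the sum of two independent count variables."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def count_distribution(probs):
    """Distribution over the number of active variables among independent
    Bernoulli(p_i) variables, via divide-and-conquer convolution.
    Returns a list where entry k is P(exactly k variables active)."""
    if len(probs) == 1:
        p = probs[0]
        return [1.0 - p, p]
    mid = len(probs) // 2
    return convolve(count_distribution(probs[:mid]),
                    count_distribution(probs[mid:]))
```

For example, two independent fair coins give `count_distribution([0.5, 0.5])` equal to `[0.25, 0.5, 0.25]`, the usual binomial counts.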
Ellipsoidal Multiple Instance Learning
2013
Cited by 4 (1 self)
We propose a large margin method for asymmetric learning with ellipsoids, called eMIL, suited to multiple instance learning (MIL). We derive the distance between ellipsoids and the hyperplane, generalising the standard support vector machine. Negative bags in MIL contain only negative instances, and we treat them akin to uncertain observations in the robust optimisation framework. However, our method allows positive bags to cross the margin, since it is not known which instances within them are positive. We show that representing bags as ellipsoids under the introduced distance is the most robust solution when treating a bag as a random variable with finite mean and covariance. Two algorithms are derived to solve the resulting non-convex optimization problem: a concave-convex procedure and a quasi-Newton method. Our method achieves competitive results on benchmark datasets. We introduce a MIL dataset from a real-world application of detecting wheel defects from multiple partial observations, and show that eMIL outperforms competing approaches.
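The ellipsoid-to-hyperplane distance underlying this kind of robust formulation has a closed form: for an ellipsoid with center c and shape matrix Σ, the distance to the hyperplane {x : w·x + b = 0} is (|w·c + b| − √(wᵀΣw)) / ‖w‖. This is a sketch of that standard geometric quantity under our reading of the abstract, not necessarily the paper's exact formulation.

```python
import math

def ellipsoid_hyperplane_distance(c, Sigma, w, b):
    """Distance from the ellipsoid {c + Sigma^{1/2} u : ||u|| <= 1} to the
    hyperplane {x : w.x + b = 0}. A negative value means the ellipsoid
    crosses the hyperplane (the case eMIL permits for positive bags)."""
    n = len(w)
    wc = sum(w[i] * c[i] for i in range(n)) + b          # w.c + b
    wSw = sum(w[i] * sum(Sigma[i][j] * w[j] for j in range(n))
              for i in range(n))                          # w^T Sigma w
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    return (abs(wc) - math.sqrt(wSw)) / norm_w
```

With the identity matrix as Σ this reduces to the point-to-plane distance minus the unit-ball radius, matching the intuition of treating each bag as a center plus covariance-shaped uncertainty.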