Results 11 – 20 of 62
Supplement for the paper titled “Co-regularization Based Semi-supervised Domain Adaptation”
"... Let h ∗ s and h ∗ t be the optimal source and target hypotheses in Hs and Ht respectively. Using triangle inequality for the loss function, we have ǫt(ht, ft) ≤ ǫt(ht, h ∗ t) + ǫt(h ∗ t,ft). We use the notion of dH∆Hdistance in the next step, which is defined as suph1,h2∈H 2ǫs(h1, h2) − ǫt(h1, h ..."
reduction to the term E_s(x) are similar to [2].) Proof of Theorem 4.4: Complexity for EA. In this section, we bound the complexity of the target hypothesis class J_EA^t for EA. The base hypothesis class H in Eq. 4.3 (of the original paper) is symmetric in source and target hypotheses, so the complexity
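The truncated bound in the snippet above follows the standard domain-adaptation analysis; under the usual notation (a reconstruction for readability, not the supplement's exact statement), the triangle-inequality step and the distance it invokes read:

```latex
\epsilon_t(h_t, f_t) \le \epsilon_t(h_t, h_t^*) + \epsilon_t(h_t^*, f_t),
\qquad
d_{\mathcal{H}\Delta\mathcal{H}}(D_s, D_t)
  = 2 \sup_{h_1, h_2 \in \mathcal{H}}
    \bigl| \epsilon_s(h_1, h_2) - \epsilon_t(h_1, h_2) \bigr|.
```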
Does Unlabeled Data Provably Help? Worst-case Analysis of the Sample Complexity of Semi-Supervised Learning
"... We study the potential benefits of unlabeled data to classification prediction to the learner. We compare learning in the semisupervised model to the standard, supervised PAC (distribution free) model, considering both the realizable and the unrealizable (agnostic) settings. Roughly speaking, our ..."

Cited by 26 (2 self)
Sparse semi-supervised hyperspectral unmixing using a novel iterative Bayesian inference algorithm
 in: 19th European Signal Processing Conference (EUSIPCO), 2011
"... In this paper a novel hierarchical Bayesian model for sparse semisupervised hyperspectral unmixing is presented. Adopting the sparsity hypothesis and taking into account the convex constraints of the estimation problem, suitable priors are selected for the model parameters. Then, a new lowcomplex ..."

Cited by 1 (0 self)
Semi-Supervised Learning of Acoustic-Driven Prosodic Phrase Breaks for Text-to-Speech Systems
"... In this paper, we propose a semisupervised learning of acoustic driven phrase breaks and its usefulness for texttospeech systems. In this work, we derive a set of initial hypothesis of phrase breaks in a speech signal using pause as an acoustic cue. As these initial estimates are obtained based o ..."

Cited by 1 (0 self)
SEG-SSC: A Framework Based on Synthetic Examples Generation for Self-Labeled Semi-Supervised Classification
"... Abstract—Selflabeled techniques are semisupervised classification methods that address the shortage of labeled examples via a selflearning process based on supervised models. They progressively classify unlabeled data and use them to modify the hypothesis learned from labeled samples. Most relev ..."
Appendix to Semi-Supervised Learning on Single-View Datasets by Integration of Multiple Co-Trained Classifiers (Supplementary for Section IV.B on Experimental Results)
"... In order to test whether the reported differences in accuracy are statistically significant, we perform a twostep comparison of the considered methods, recommended by Demsar (2006) and Garcia et al. (2008). The first step was to apply a Friedman test that rejects the null hypothesis, which states t ..."
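The two-step comparison this appendix describes (an omnibus Friedman test over per-dataset ranks, followed by post-hoc tests only if it rejects) can be sketched with SciPy; the accuracy table below is invented purely for illustration:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical accuracies: rows = datasets, columns = methods.
acc = np.array([
    [0.81, 0.79, 0.84],
    [0.72, 0.70, 0.75],
    [0.90, 0.88, 0.91],
    [0.66, 0.64, 0.69],
    [0.78, 0.77, 0.80],
])

# Step 1: Friedman test on the per-dataset ranks of the three methods.
stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")
# Step 2 (if p is below the chosen alpha): a post-hoc test such as
# Nemenyi, or Holm-corrected pairwise comparisons, identifies which
# method pairs actually differ.
```

With these invented numbers the ranking is identical on every dataset, so the test rejects decisively; on real results the ranks vary and the p-value is correspondingly larger.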
Author manuscript, published in "International Workshop on Machine Learning in Systems Biology, Vienna, Austria (2011)": A New Theoretical Angle to Semi-supervised Output Kernel Regression for Protein-protein Interaction Network Inference
, 2013
"... Recent years have witnessed a surge of interest for network inference in biological networks. In silico prediction of proteinprotein interaction (PPI) networks is motivated by the cost and the difficulty to experimentally detect physical interactions between proteins. The underlying hypothesis is t ..."
interact, from a dataset of labeled pairs of proteins [1–5], and matrix completion approaches that fit into an unsupervised setting with some constraints [6, 7] or directly into a semi-supervised framework [8, 9]. Let O denote the set of descriptions of the proteins we are interested in. In this paper
Statistical Hypothesis Testing in Positive Unlabelled Data
"... Abstract. We propose a set of novel methodologies which enable valid statistical hypothesis testing when we have only positive and unlabelled (PU) examples. This type of problem, a special case of semisupervised data, is common in text mining, bioinformatics, and computer vision. Focusing on a gen ..."

Cited by 3 (3 self)
Exploiting Unlabeled Data in Ensemble Methods
"... An adaptive semisupervised ensemble method, ASSEMBLE, is proposed that constructs classification ensembles based on both labeled and unlabeled data. ASSEMBLE alternates between assigning "pseudoclasses" to the unlabeled data using the existing ensemble and constructing the next base clas ..."

Cited by 64 (0 self)
classifier using both the labeled and pseudo-labeled data. Mathematically, this intuitive algorithm corresponds to maximizing the classification margin in hypothesis space as measured on both the labeled and unlabeled data. Unlike alternative approaches, ASSEMBLE does not require a semi-supervised learning
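The alternation this abstract describes (assign pseudo-classes to unlabeled data with the current model, then refit on labeled plus pseudo-labeled data) can be sketched in a few lines. This is a minimal self-training loop in that spirit, not the authors' boosting formulation; the nearest-centroid base classifier and toy data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: class 0 clusters near -2, class 1 near +2.
X_lab = np.array([-2.1, -1.9, 1.9, 2.1])
y_lab = np.array([0, 0, 1, 1])
X_unl = rng.normal(0.0, 2.0, size=50)  # unlabeled pool

def fit_centroids(X, y):
    # "Base classifier": one centroid per class.
    return np.array([X[y == c].mean() for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the nearest class centroid.
    return np.argmin(np.abs(X[:, None] - centroids[None, :]), axis=1)

X, y = X_lab, y_lab
for _ in range(3):
    centroids = fit_centroids(X, y)
    # Assign "pseudo-classes" to the unlabeled pool, then refit on
    # labeled + pseudo-labeled data, mirroring ASSEMBLE's alternation.
    pseudo = predict(centroids, X_unl)
    X = np.concatenate([X_lab, X_unl])
    y = np.concatenate([y_lab, pseudo])

print(predict(centroids, np.array([-2.0, 2.0])))  # → [0 1]
```

ASSEMBLE itself additionally weights the pseudo-labeled points so that the refit maximizes the margin on both labeled and unlabeled data, which this sketch omits.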
On Causal and Anticausal Learning
"... We consider the problem of function estimation in the case where an underlying causal model can be inferred. This has implications for popular scenarios such as covariate shift, concept drift, transfer learning and semisupervised learning. We argue that causal knowledge may facilitate some approach ..."
approaches for a given problem, and rule out others. In particular, we formulate a hypothesis for when semi-supervised learning can help, and corroborate it with empirical results.