Results 1 – 5 of 5
Semi-Supervised Learning via Generalized Maximum Entropy
"... Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms o ..."
Abstract

Cited by 5 (1 self)
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multiclass, easy to implement, and naturally kernelizable. Experimental evaluation of special cases shows the competitiveness of our methodology.
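The objective described in this abstract can be sketched generically as follows (notation is ours, not the paper's: Δ is the probability simplex, f_j are feature functions, b_j their prior moment estimates, and φ_j the potential functions):

```latex
% Generic primal form of generalized MaxEnt: maximize entropy while
% penalizing deviation of model moments E_p[f_j] from prior estimates b_j
% through potential functions phi_j (notation ours, not the paper's).
\max_{p \in \Delta} \; H(p) \;-\; \sum_{j} \phi_j\!\bigl(\mathbb{E}_{p}[f_j] - b_j\bigr)
```

Hard moment constraints correspond to indicator potentials, while soft penalties (e.g., squared or box-shaped φ_j) recover regularized variants; per the abstract, unlabeled data enters by modifying these potentials.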
ASC: Automatically scalable computation
In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’14), 2014
"... We present an architecture designed to transparently and automatically scale the performance of sequential programs as a function of the hardware resources available. The architecture is predicated on a model of computation that views program execution as a walk through the enormous state space com ..."
Abstract

Cited by 2 (2 self)
We present an architecture designed to transparently and automatically scale the performance of sequential programs as a function of the hardware resources available. The architecture is predicated on a model of computation that views program execution as a walk through the enormous state space composed of the memory and registers of a single-threaded processor. Each instruction execution in this model moves the system from its current point in state space to a deterministic subsequent point. We can parallelize such execution by predictively partitioning the complete path and speculatively executing each partition in parallel. Accurately partitioning the path is a challenging prediction problem. We have implemented our system using a functional simulator that emulates the x86 instruction set, including a collection of state predictors and a mechanism for speculatively executing threads that explore potential states along the execution path. While the overhead of our simulation makes it impractical to measure speedup relative to native x86 execution, experiments on three benchmarks show scalability of up to a factor of 256 on a 1024-core machine when executing unmodified sequential programs.
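The predict/verify/re-execute cycle behind this model can be shown with a toy: the sketch below (our own illustration with invented names such as `step` and `perfect_predictor`, not the paper's implementation) replaces x86 state with a single integer and walks it forward deterministically.

```python
# Toy sketch of ASC-style speculative partitioning (illustrative, not the
# paper's system). Execution is a deterministic walk through state space:
# step() maps each state to the next.

def step(state):
    # Stand-in for one instruction: a deterministic counter update.
    return (state + 3) % 101

def run(state, n):
    """Ground-truth sequential execution of n steps."""
    for _ in range(n):
        state = step(state)
    return state

def speculative_run(state, n, predictor, parts=4):
    """Split an n-step walk into partitions. A predictor guesses the state
    at each partition boundary so the partitions could run concurrently;
    each guess is then verified against the true incoming state, and a
    mispredicted partition is re-executed from the correct state."""
    chunk = n // parts
    # Predicted start states for partitions 1..parts-1 (partition 0 is exact).
    starts = [state] + [predictor(state, chunk * i) for i in range(1, parts)]
    # Speculative execution of every partition (would be parallel in ASC).
    ends = [run(s, chunk) for s in starts]
    # Commit phase: stitch partitions together, recomputing on a miss.
    result, hits = ends[0], 0
    for i in range(1, parts):
        if starts[i] == result:
            hits += 1
            result = ends[i]             # speculation succeeded: reuse work
        else:
            result = run(result, chunk)  # misprediction: re-execute
    return run(result, n - chunk * parts), hits  # finish leftover steps

def perfect_predictor(state, k):
    return run(state, k)  # an oracle; real predictors must be learned
```

With a perfect predictor every partition's work is reused; with a useless predictor the result is still correct, just computed sequentially in the commit phase.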
Multi-View Budgeted Learning under Label and Feature Constraints Using Label-Guided Graph-Based Regularization
"... Budgeted learning under constraints on both the amount of labeled information and the availability of features at test time pertains to a large number of real world problems. Ideas from multiview learning, semisupervised learning, and even active learning have applicability, but a common framework ..."
Abstract

Cited by 1 (0 self)
Budgeted learning under constraints on both the amount of labeled information and the availability of features at test time pertains to a large number of real-world problems. Ideas from multi-view learning, semi-supervised learning, and even active learning have applicability, but a common framework whose assumptions fit these problem spaces is nontrivial to construct. We leverage ideas from these fields based on graph regularizers to construct a robust framework for learning from labeled and unlabeled samples in multiple views that are non-independent and include features that are inaccessible at the time the model would need to be applied. We describe examples of applications that fit this scenario, and we provide experimental results to demonstrate the effectiveness of knowledge carryover from training-only views.
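The graph-regularized core common to these methods can be pictured with a minimal sketch (our own simplification in our own notation; the paper's multi-view objective is richer): labels spread over a similarity graph by trading off fit on labeled nodes against the smoothness penalty fᵀLf, which is one way knowledge from training-only views can be baked into the graph weights.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W of a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def propagate(W, y, labeled, lam=1.0):
    """Minimize sum_{i in labeled} (f_i - y_i)^2 + lam * f' L f,
    i.e. solve (S + lam*L) f = S y where S selects the labeled nodes.
    A tiny ridge keeps the system well conditioned."""
    n = len(y)
    S = np.zeros((n, n))
    for i in labeled:
        S[i, i] = 1.0
    L = graph_laplacian(W)
    return np.linalg.solve(S + lam * L + 1e-9 * np.eye(n), S @ y)

# A 4-node path graph 0-1-2-3 with labels only at the endpoints:
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
f = propagate(W, np.array([0., 0., 0., 1.]), labeled=[0, 3])
# f interpolates smoothly along the path, roughly [0.2, 0.4, 0.6, 0.8]
```

The unlabeled middle nodes receive graded scores purely from graph smoothness, which is the mechanism the regularizers above exploit.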
Bias Selection Using Task-Targeted Random Subspaces for Robust Application of Graph-Based Semi-Supervised Learning
"... Abstract—Graphs play a role in many semisupervised learning algorithms, where unlabeled samples are used to find useful structural properties in the data. Dimensionality reduction and regularization based on preserving smoothness over a graph are common in these settings, and they perform particula ..."
Abstract
Graphs play a role in many semi-supervised learning algorithms, where unlabeled samples are used to find useful structural properties in the data. Dimensionality reduction and regularization based on preserving smoothness over a graph are common in these settings, and they perform particularly well if proximity in the original feature space closely reflects similarity in the classification problem of interest. However, many real-world problem spaces are overwhelmed by noise in the form of features that have no useful relevance to the concept that is being learned. This leads to a lack of robustness in these methods that limits their applicability to new domains. We present a graph-construction method that uses a collection of task-specific random subspaces to promote smoothness with respect to the problem of interest. Application of this method in a graph-based semi-supervised setting demonstrates improvements in both the effectiveness and robustness of the learning algorithms in noisy problem domains.
Keywords: applications; graph Laplacian; semi-supervised
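As a rough illustration of the idea (our own simplified sketch with invented names, not the paper's algorithm): score random feature subspaces by how strongly they pull same-class labeled points together, then build the graph only from the best-scoring subspaces so that noise features are down-weighted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_graph(X, gamma=1.0):
    """Dense RBF similarity graph over the given feature columns."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)
    np.fill_diagonal(W, 0.0)
    return W

def subspace_score(W, y, labeled):
    """Higher when same-class labeled pairs are more similar than
    cross-class pairs -- a crude proxy for task relevance."""
    score = 0.0
    for a, i in enumerate(labeled):
        for j in labeled[a + 1:]:
            score += W[i, j] if y[i] == y[j] else -W[i, j]
    return score

def task_targeted_graph(X, y, labeled, n_sub=20, k=2, keep=5):
    """Average the similarity graphs of the best-scoring random
    feature subspaces."""
    scored = []
    for _ in range(n_sub):
        feats = rng.choice(X.shape[1], size=k, replace=False)
        W = rbf_graph(X[:, feats])
        scored.append((subspace_score(W, y, labeled), W))
    scored.sort(key=lambda t: t[0], reverse=True)
    return sum(W for _, W in scored[:keep]) / keep

# Feature 0 separates the classes; features 1-2 are pure noise.
X = np.array([[0.0,  5.0, -1.0],
              [0.1, -3.0,  2.0],
              [5.0,  4.9,  2.1],
              [5.1, -2.9, -0.9]])
y = np.array([0, 0, 1, 1])
W = task_targeted_graph(X, y, labeled=[0, 1, 2, 3], k=1, keep=3)
```

Here the subspace containing only feature 0 scores positively while the noise-feature subspaces score negatively, so the combined graph is biased toward the task-relevant geometry.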