Results 1 – 10 of 15,874
Hyperparameter Learning for Graph-Based . . .
MACHINE SIMULATOR, THIRD INTERNATIONAL CONFERENCE ON COMPUTER ASSISTED LEARNING, 1990
"... Semi-supervised learning algorithms have been successfully applied in many applications with scarce labeled data, by utilizing the unlabeled data. One important category is graph-based semi-supervised learning algorithms, for which the performance depends considerably on the quality of the graph, or its hyperparameters. In this paper, we deal with the less explored problem of learning the graphs. We propose a graph learning method for the harmonic energy minimization method; this is done by minimizing the leave-one-out prediction error on labeled data points. We use a gradient-based method ..."
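The leave-one-out idea in this entry can be sketched concretely. The following is an illustrative sketch, not the paper's implementation: it selects the RBF bandwidth of the graph used by harmonic energy minimization via a grid search over the leave-one-out error on labeled points (the paper optimizes with a gradient-based method). All function names and the toy data are assumptions.

```python
import numpy as np

def harmonic_predict(W, labeled, unlabeled, y_labeled):
    """Harmonic solution f_u = (D_uu - W_uu)^{-1} W_ul y_l."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    W_ul = W[np.ix_(unlabeled, labeled)]
    return np.linalg.solve(L_uu, W_ul @ y_labeled)

def loo_error(X, labeled, y_labeled, sigma):
    """Mean squared leave-one-out error of harmonic predictions."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))    # RBF edge weights
    np.fill_diagonal(W, 0.0)
    n = X.shape[0]
    err = 0.0
    for k, i in enumerate(labeled):
        # hold out labeled point i and treat it as unlabeled
        rest = [labeled[j] for j in range(len(labeled)) if j != k]
        y_rest = np.array([y_labeled[j] for j in range(len(labeled)) if j != k])
        unl = [j for j in range(n) if j not in rest]
        f = harmonic_predict(W, rest, unl, y_rest)
        err += (f[unl.index(i)] - y_labeled[k]) ** 2
    return err / len(labeled)

# Toy usage: two Gaussian clusters; pick sigma with the lowest LOO error.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(5.0, 0.3, (10, 2))])
labeled = [0, 1, 10, 11]
y_labeled = np.array([1.0, 1.0, -1.0, -1.0])
best = min([0.3, 1.0, 3.0, 10.0], key=lambda s: loo_error(X, labeled, y_labeled, s))
```

A bandwidth matched to the cluster scale yields near-zero error, while an overly large one washes out the cluster structure.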
Hyperparameter Learning in Robust Soft LVQ
Proc. of European Symposium on Artificial Neural Networks (ESANN'2009), 2009
"... Abstract. We present a technique to extend Robust Soft Learning Vector Quantization (RSLVQ). This algorithm is derived from an explicit cost function and follows the dynamics of a stochastic gradient ascent. The RSLVQ cost function involves a hyperparameter which is kept fixed during training. We pr ..."
Cited by 2 (0 self)
Efficient multiple hyperparameter learning for log-linear models
in NIPS, 2007
"... Using multiple regularization hyperparameters is an effective method for managing model complexity in problems where input features have varying amounts of noise. While algorithms for choosing multiple hyperparameters are often used in neural networks and support vector machines, they are not common in structured prediction tasks, such as sequence labeling or parsing. In this paper, we consider the problem of learning regularization hyperparameters for log-linear models, a class of probabilistic models for structured prediction tasks which includes conditional random fields (CRFs). Using ..."
Cited by 13 (2 self)
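The notion of multiple regularization hyperparameters can be made concrete with a minimal sketch, assuming a plain multinomial logistic (log-linear) model with per-feature-group L2 strengths. The function names, toy data, and fixed lambdas are illustrative; the paper's actual contribution, learning the lambdas, is not implemented here.

```python
import numpy as np

def penalized_nll_grad(W, X, y, group_ids, lambdas):
    """Negative log-likelihood with per-group L2 penalties, plus gradient."""
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)
    n, K = P.shape
    Y = np.eye(K)[y]                              # one-hot labels
    lam = lambdas[group_ids][:, None]             # per-feature lambda, shape (d, 1)
    nll = -np.log(P[np.arange(n), y]).sum() + 0.5 * (lam * W ** 2).sum()
    grad = X.T @ (P - Y) + lam * W
    return nll, grad

# Toy usage: 6 features in 2 groups, 3 classes, plain gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))
y = rng.integers(0, 3, size=60)
group_ids = np.array([0, 0, 0, 1, 1, 1])
lambdas = np.array([0.1, 10.0])                   # group 1 penalized harder
W = np.zeros((6, 3))
nll_start, _ = penalized_nll_grad(W, X, y, group_ids, lambdas)
for _ in range(200):
    nll, g = penalized_nll_grad(W, X, y, group_ids, lambdas)
    W -= 0.01 * g
nll_end, _ = penalized_nll_grad(W, X, y, group_ids, lambdas)
```

Giving noisy feature groups a larger lambda shrinks their weights more aggressively than a single shared hyperparameter could.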
A majorization-minimization algorithm for (multiple) hyperparameter learning
"... We present a general Bayesian framework for hyperparameter tuning in L2-regularized supervised learning models. Paradoxically, our algorithm works by first analytically integrating out the hyperparameters from the model. We find a local optimum of the resulting nonconvex optimization problem efficie ..."
Cited by 7 (0 self)
Hyperparameter Learning in Probabilistic Prototype-based models
"... We present two approaches to extend Robust Soft Learning Vector Quantization (RSLVQ). This algorithm for nearest prototype classification is derived from an explicit cost function and follows the dynamics of a stochastic gradient ascent. The RSLVQ cost function is defined in terms of a likelihood ra ... classification approach. Experiments on artificial and real-life data show that the hyperparameter crucially influences the performance of RSLVQ. However, it is not possible to estimate the best value from the data prior to learning. We show that the proposed variant of RSLVQ is very robust with respect ..."
Robust Graph Hyperparameter Learning for Graph-Based Semi-Supervised Classification
"... Abstract. Graph-based semi-supervised learning has attracted much attention in recent years. Many successful methods rely on graph structure to propagate labels from labeled data to unlabeled data. Although graph structure affects the performance of the system, only a few works address its construction problem. In this work, the graph structure is constructed by adjusting the hyperparameter controlling the connection weights between nodes. The idea is to learn the hyperparameters for graphs that yield a low leave-one-out prediction error on labeled data while retaining high stability ..."
Gaussian processes for machine learning
in: Adaptive Computation and Machine Learning, 2006
"... Abstract. We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Process and end with conclusions and a look at the current trends in GP work. Supervised learning in the form of regression (for continuous outputs) and classification (for discrete outputs) is an important constituent ..."
Cited by 631 (2 self)
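Learning hyperparameters "using the marginal likelihood", as this abstract describes, can be sketched for a zero-mean GP with an RBF kernel: compute the log marginal likelihood on a toy 1-D problem and pick the length-scale that maximizes it over a grid. The noise level and the grid of candidate length-scales are assumptions for the example.

```python
import numpy as np

def log_marginal_likelihood(X, y, length_scale, noise=0.1):
    """log p(y | X, length_scale) for a zero-mean GP with an RBF kernel."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / length_scale ** 2) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return (-0.5 * y @ alpha                        # data-fit term
            - np.log(np.diag(L)).sum()              # complexity term (0.5 log|K|)
            - 0.5 * len(X) * np.log(2.0 * np.pi))

# Toy usage: smooth data favors a moderate length-scale over extreme ones.
X = np.linspace(0.0, 6.0, 30)
y = np.sin(X)
grid = [0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
best = max(grid, key=lambda ls: log_marginal_likelihood(X, y, ls))
```

The data-fit and complexity terms trade off automatically, which is why maximizing the marginal likelihood penalizes both over- and under-fitting length-scales.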
Learning probabilistic relational models
In IJCAI, 1999
"... A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."
Cited by 619 (31 self)
Semi-Supervised Learning Literature Survey
2006
"... We review the literature on semi-supervised learning, which is an area in machine learning and, more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semi-supervised learning. This document is a chapter ..."
Cited by 757 (8 self)
The Infinite Hidden Markov Model
Machine Learning, 2002
"... We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. Th ..."
Cited by 629 (41 self)